Go 1.25's Container-Aware GOMAXPROCS: What You Need to Know
Go 1.25 just dropped with a long-awaited change to GOMAXPROCS that significantly alters how Go applications behave in containerized environments. The runtime now automatically detects and respects container CPU limits when setting GOMAXPROCS. This isn't just a minor improvement: it's a shift that may dramatically improve performance for millions of containerized Go applications.
It isn't the only notable change in the release, but it's the one I'll focus on in this post.
The Problem That Plagued Go for Years
Before Go 1.25, there was a fundamental mismatch between Go’s runtime and containerized environments:
# Your Kubernetes pod with 2 CPU cores
resources:
limits:
cpu: "2"
# But Go saw the entire host machine
runtime.NumCPU() // Returns 8 (host CPUs)
runtime.GOMAXPROCS(0) // Also 8 - ignoring your 2-core limit!
This led to:
- Over-scheduling: Go ran 8 OS threads for CPU-bound work on a 2-core container
- Context-switching overhead: those excess threads fought over 2 cores, degrading performance
- Resource contention: multiple containers competing for the same CPU cores
- Manual workarounds: developers had to set GOMAXPROCS by hand everywhere
Popular libraries like uber-go/automaxprocs existed solely to fix this fundamental issue, showing how widespread the problem was.
What Changed in Go 1.25
Go 1.25 introduces two features:
1. Container-Aware GOMAXPROCS (Linux)
The runtime now reads cgroup CPU bandwidth limits and automatically sets GOMAXPROCS accordingly:
// On a container with 2 CPU cores limit:
runtime.NumCPU() // Still returns 8 (host CPUs)
runtime.GOMAXPROCS(0) // Now returns 2 (respects container limit!)
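To build intuition for what the runtime is doing under the hood, here's a simplified sketch of the cgroup v2 calculation. This is illustrative only, not the runtime's actual code: `parseCPUMax` is my own helper, and the real implementation also handles cgroup v1 and limits set on parent cgroups.

```go
package main

import (
	"fmt"
	"math"
	"os"
	"strconv"
	"strings"
)

// parseCPUMax parses the cgroup v2 cpu.max format: "<quota> <period>",
// or "max <period>" when no bandwidth limit is set. It returns the CPU
// limit (quota/period) and whether a limit exists.
func parseCPUMax(s string) (float64, bool) {
	fields := strings.Fields(s)
	if len(fields) != 2 || fields[0] == "max" {
		return 0, false
	}
	quota, err1 := strconv.ParseFloat(fields[0], 64)
	period, err2 := strconv.ParseFloat(fields[1], 64)
	if err1 != nil || err2 != nil || period == 0 {
		return 0, false
	}
	return quota / period, true
}

func main() {
	data, err := os.ReadFile("/sys/fs/cgroup/cpu.max")
	if err != nil {
		fmt.Println("no cgroup v2 cpu.max found")
		return
	}
	if limit, ok := parseCPUMax(string(data)); ok {
		// Fractional limits round up, mirroring Go 1.25's behavior.
		fmt.Printf("CPU limit: %.2f -> GOMAXPROCS candidate: %d\n",
			limit, int(math.Ceil(limit)))
	} else {
		fmt.Println("no CPU bandwidth limit set")
	}
}
```

For example, a pod with `cpu: "2"` and the default 100ms period shows up as `200000 100000` in `cpu.max`.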
2. Dynamic GOMAXPROCS Updates
The runtime periodically updates GOMAXPROCS if CPU availability changes:
// Your application automatically adapts to:
// - Kubernetes horizontal pod autoscaling
// - Container CPU limit changes
// - Node CPU availability changes
Seeing It in Action
Let me show you the difference with a practical example. Here’s a test program that reveals the new behavior:
package main
import (
"fmt"
"runtime"
"time"
)
func main() {
fmt.Printf("Go version: %s\n", runtime.Version())
fmt.Printf("Host CPUs: %d\n", runtime.NumCPU())
fmt.Printf("GOMAXPROCS: %d\n", runtime.GOMAXPROCS(0))
// Monitor for changes (Go 1.25 feature)
for i := 0; i < 10; i++ {
time.Sleep(5 * time.Second)
fmt.Printf("Time %ds: GOMAXPROCS = %d\n",
(i+1)*5, runtime.GOMAXPROCS(0))
}
}
Kubernetes Results Comparison
Go 1.24 and earlier:
# Pod with 2 CPU cores limit
resources:
limits:
cpu: "2"
Go version: go1.24.0
Host CPUs: 8
GOMAXPROCS: 8 # ❌ Ignores container limit
Go 1.25:
Go version: go1.25.0
Host CPUs: 8
GOMAXPROCS: 2 # ✅ Respects container limit!
Docker Results
Running the same program in Docker with CPU limits:
# Docker with 1.5 CPU cores
docker run --cpus="1.5" golang:1.25 go run main.go
Go version: go1.25.0
Host CPUs: 8
GOMAXPROCS: 2 # ✅ Rounds up fractional CPU limits
Complete Examples and Testing
I’ve created comprehensive examples to test all scenarios. Here are the key ones:
Kubernetes Manifest Examples
# Example 1: CPU-limited container
apiVersion: apps/v1
kind: Deployment
metadata:
name: go-app-cpu-limited
spec:
template:
spec:
containers:
- name: go-app
image: golang:1.25
resources:
limits:
cpu: "2" # GOMAXPROCS will be 2
memory: "256Mi"
command: ["go", "run", "/app/main.go"]
# Example 2: Manual override (disables auto-detection)
apiVersion: apps/v1
kind: Deployment
metadata:
name: go-app-manual
spec:
template:
spec:
containers:
- name: go-app
image: golang:1.25
resources:
limits:
cpu: "2" # CPU limit is 2
env:
- name: GOMAXPROCS
value: "8" # Manual override - will use 8
command: ["go", "run", "/app/main.go"]
When the Magic Doesn’t Happen
The automatic behavior is disabled in these cases:
1. Manual GOMAXPROCS Setting
# Environment variable
GOMAXPROCS=8 go run main.go
// Or in code
runtime.GOMAXPROCS(8)
2. GODEBUG Overrides
# Disable container awareness
GODEBUG=containermaxprocs=0 go run main.go
# Disable dynamic updates
GODEBUG=updatemaxprocs=0 go run main.go
# Disable both
GODEBUG=containermaxprocs=0,updatemaxprocs=0 go run main.go
3. Non-Linux Platforms
Currently, container-aware GOMAXPROCS only works on Linux (where cgroups exist).
Migration Guide for Existing Applications
1. Remove Manual GOMAXPROCS Workarounds
Before (Go < 1.25):
import (
	"os"
	"runtime"
	"strconv"

	_ "go.uber.org/automaxprocs" // Popular workaround library
)

func init() {
	// Manual calculation based on container limits
	if cpuLimit := os.Getenv("CPU_LIMIT"); cpuLimit != "" {
		if limit, err := strconv.Atoi(cpuLimit); err == nil {
			runtime.GOMAXPROCS(limit)
		}
	}
}
After (Go 1.25):
// Just delete all the manual workarounds!
// Go handles it automatically now
2. Test Thoroughly
Since this changes fundamental runtime behavior, comprehensive testing is critical:
# Test your application with various CPU limits
docker run --cpus="1" myapp:go1.25
docker run --cpus="2" myapp:go1.25
docker run --cpus="4" myapp:go1.25
# Monitor performance metrics
kubectl top pods
3. Monitor Key Metrics
Watch these metrics after upgrading:
- CPU utilization efficiency
- Response times
- Context switch rates
- Memory usage patterns
- Goroutine counts
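A minimal in-process sampler for these signals might look like the sketch below. In production you would export them to Prometheus or a similar system; `runtimeSnapshot` and `collect` are names I've made up for illustration.

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// runtimeSnapshot captures the runtime signals worth watching
// after upgrading to Go 1.25.
type runtimeSnapshot struct {
	GOMAXPROCS int
	Goroutines int
	HeapAlloc  uint64
}

// collect reads the current values from the runtime.
func collect() runtimeSnapshot {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return runtimeSnapshot{
		GOMAXPROCS: runtime.GOMAXPROCS(0),
		Goroutines: runtime.NumGoroutine(),
		HeapAlloc:  m.HeapAlloc,
	}
}

func main() {
	// Sample a few times; a real service would run this for the
	// process lifetime and feed a metrics pipeline.
	for i := 0; i < 3; i++ {
		s := collect()
		fmt.Printf("gomaxprocs=%d goroutines=%d heap=%dB\n",
			s.GOMAXPROCS, s.Goroutines, s.HeapAlloc)
		time.Sleep(100 * time.Millisecond)
	}
}
```

Watching GOMAXPROCS over time is particularly useful here, since Go 1.25 can change it dynamically as container limits change.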
Cloud Platform Compatibility
This change affects major cloud platforms:
AWS
- ECS Fargate: ✅ Respects CPU limits
- EKS: ✅ Works with pod CPU limits
- Lambda: ✅ May improve performance in custom runtimes
Google Cloud
- Cloud Run: ✅ Automatically optimizes for allocated CPU
- GKE: ✅ Full Kubernetes support
- Cloud Functions: ✅ Benefits custom Go runtimes
Azure
- Container Instances: ✅ Respects CPU limits
- AKS: ✅ Full Kubernetes support
- Functions: ✅ Helps custom Go runtimes
Performance Tuning Considerations
When Automatic is Perfect
- Standard web applications: REST APIs, microservices
- Data processing pipelines: ETL jobs, stream processing
- Background workers: Queue processors, schedulers
When Manual Tuning May Be Better
- CPU-intensive algorithms: May need fine-tuning for specific workloads
- Memory-bound applications: GOMAXPROCS might not be the bottleneck
- Legacy applications: With complex custom scheduling logic
Testing Your Specific Workload
func benchmarkWithDifferentGOMAXPROCS() {
	original := runtime.GOMAXPROCS(0)
	defer runtime.GOMAXPROCS(original) // restore the runtime's choice afterwards
	for _, procs := range []int{1, 2, 4, 8} {
		runtime.GOMAXPROCS(procs)
		start := time.Now()
		runYourWorkload() // placeholder: substitute your own CPU-bound workload
		fmt.Printf("GOMAXPROCS=%d: %v\n", procs, time.Since(start))
	}
}
Troubleshooting Common Issues
Issue 1: Performance Regression
Symptoms: Application runs slower after upgrading to Go 1.25.
Solution:
# Temporarily disable to confirm
GODEBUG=containermaxprocs=0 go run main.go
# Or use manual setting
GOMAXPROCS=8 go run main.go
Issue 2: GOMAXPROCS Not Changing
Check: Container actually has CPU limits set
# In container, check cgroup limits
cat /sys/fs/cgroup/cpu.max
cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
Issue 3: Fractional CPU Confusion
Remember: Go rounds UP fractional CPU limits
CPU Limit: 1.5 → GOMAXPROCS: 2
CPU Limit: 2.7 → GOMAXPROCS: 3
CPU Limit: 3.9 → GOMAXPROCS: 4
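The rounding rule from the table above is just a ceiling function. Here's a tiny sketch (`gomaxprocsForLimit` is my own helper, not a runtime API, and it's simplified: the real runtime also caps the result at the host's logical CPU count):

```go
package main

import (
	"fmt"
	"math"
)

// gomaxprocsForLimit applies the rounding rule: fractional CPU limits
// round up to the next whole integer.
func gomaxprocsForLimit(limit float64) int {
	return int(math.Ceil(limit))
}

func main() {
	for _, limit := range []float64{1.5, 2.7, 3.9} {
		fmt.Printf("CPU Limit: %.1f -> GOMAXPROCS: %d\n",
			limit, gomaxprocsForLimit(limit))
	}
}
```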
The Bigger Picture: Why This Matters
This change represents a fundamental shift in how Go applications integrate with modern infrastructure:
1. Cloud-Native by Default
Go applications now understand their containerized environment without additional configuration.
2. Better Resource Efficiency
Automatic optimization means better cost efficiency in cloud deployments.
3. Simplified Operations
DevOps teams no longer need to manually tune GOMAXPROCS for every deployment.
4. Platform Consistency
The same application binary adapts to different deployment environments automatically.
Future Implications
This opens doors for even smarter runtime optimizations:
- Memory-aware garbage collection tuning
- Network buffer sizing based on container limits
- Dynamic goroutine pool sizing
- Automatic backpressure mechanisms
Conclusion
Go 1.25’s container-aware GOMAXPROCS is more than just a feature—it’s a fundamental improvement that makes Go applications truly cloud-native by default. The days of manual GOMAXPROCS tuning and third-party workaround libraries are finally over.
Key takeaways:
- ✅ Automatic container CPU limit detection on Linux
- ✅ Dynamic updates when limits change
- ✅ Zero configuration required
- ✅ Significant performance improvements in many cases
- ✅ Backward compatible with manual overrides
If you’re running Go applications in containers (and who isn’t these days?), Go 1.25 should be at the top of your upgrade priority list. Test it thoroughly, but expect to see measurable performance improvements across your containerized Go workloads.
Want to test this yourself? All the examples and code from this post are included below. Try them out in your own Kubernetes cluster or Docker environment to see the magic in action!
Complete Test Code Examples
Kubernetes Manifests
Here are the complete Kubernetes manifests for testing different scenarios:
# kubernetes-manifests.yaml
---
# Example 1: CPU-limited container that will respect limits
apiVersion: apps/v1
kind: Deployment
metadata:
name: go-app-cpu-limited-1core
labels:
app: go-gomaxprocs-test
scenario: cpu-limited-1core
spec:
replicas: 1
selector:
matchLabels:
app: go-gomaxprocs-test
scenario: cpu-limited-1core
template:
metadata:
labels:
app: go-gomaxprocs-test
scenario: cpu-limited-1core
spec:
containers:
- name: go-app
image: golang:1.25
resources:
limits:
cpu: "1" # GOMAXPROCS should be 1
memory: "256Mi"
requests:
cpu: "500m"
memory: "128Mi"
command: ["sh", "-c"]
args:
- |
cat > /tmp/main.go << 'EOF'
package main
import (
"fmt"
"runtime"
"time"
)
func main() {
fmt.Printf("=== Go 1.25 GOMAXPROCS Test (1 CPU limit) ===\n")
fmt.Printf("Go version: %s\n", runtime.Version())
fmt.Printf("Host CPUs: %d\n", runtime.NumCPU())
fmt.Printf("GOMAXPROCS: %d\n", runtime.GOMAXPROCS(0))
fmt.Printf("Expected GOMAXPROCS: 1 (CPU limit)\n\n")
for i := 0; i < 6; i++ {
time.Sleep(10 * time.Second)
fmt.Printf("Time %ds: GOMAXPROCS = %d\n", (i+1)*10, runtime.GOMAXPROCS(0))
}
}
EOF
cd /tmp && go run main.go
---
# Example 2: CPU-limited container with 2 cores
apiVersion: apps/v1
kind: Deployment
metadata:
name: go-app-cpu-limited-2core
labels:
app: go-gomaxprocs-test
scenario: cpu-limited-2core
spec:
replicas: 1
selector:
matchLabels:
app: go-gomaxprocs-test
scenario: cpu-limited-2core
template:
metadata:
labels:
app: go-gomaxprocs-test
scenario: cpu-limited-2core
spec:
containers:
- name: go-app
image: golang:1.25
resources:
limits:
cpu: "2" # GOMAXPROCS should be 2
memory: "512Mi"
requests:
cpu: "1"
memory: "256Mi"
command: ["sh", "-c"]
args:
- |
cat > /tmp/main.go << 'EOF'
package main
import (
"fmt"
"runtime"
"time"
)
func main() {
fmt.Printf("=== Go 1.25 GOMAXPROCS Test (2 CPU limit) ===\n")
fmt.Printf("Go version: %s\n", runtime.Version())
fmt.Printf("Host CPUs: %d\n", runtime.NumCPU())
fmt.Printf("GOMAXPROCS: %d\n", runtime.GOMAXPROCS(0))
fmt.Printf("Expected GOMAXPROCS: 2 (CPU limit)\n\n")
for i := 0; i < 6; i++ {
time.Sleep(10 * time.Second)
fmt.Printf("Time %ds: GOMAXPROCS = %d\n", (i+1)*10, runtime.GOMAXPROCS(0))
}
}
EOF
cd /tmp && go run main.go
---
# Example 3: Manual GOMAXPROCS override (should ignore container limits)
apiVersion: apps/v1
kind: Deployment
metadata:
name: go-app-manual-override
labels:
app: go-gomaxprocs-test
scenario: manual-override
spec:
replicas: 1
selector:
matchLabels:
app: go-gomaxprocs-test
scenario: manual-override
template:
metadata:
labels:
app: go-gomaxprocs-test
scenario: manual-override
spec:
containers:
- name: go-app
image: golang:1.25
resources:
limits:
cpu: "1" # CPU limit is 1
memory: "256Mi"
env:
- name: GOMAXPROCS
value: "4" # Manual override - should use 4, not 1
command: ["sh", "-c"]
args:
- |
cat > /tmp/main.go << 'EOF'
package main
import (
"fmt"
"runtime"
"time"
)
func main() {
fmt.Printf("=== Go 1.25 GOMAXPROCS Test (Manual Override) ===\n")
fmt.Printf("Go version: %s\n", runtime.Version())
fmt.Printf("Host CPUs: %d\n", runtime.NumCPU())
fmt.Printf("GOMAXPROCS: %d\n", runtime.GOMAXPROCS(0))
fmt.Printf("CPU Limit: 1, Manual GOMAXPROCS: 4\n")
fmt.Printf("Expected GOMAXPROCS: 4 (manual override)\n\n")
for i := 0; i < 6; i++ {
time.Sleep(10 * time.Second)
fmt.Printf("Time %ds: GOMAXPROCS = %d\n", (i+1)*10, runtime.GOMAXPROCS(0))
}
}
EOF
cd /tmp && go run main.go
---
# Example 4: Fractional CPU limits (should round up)
apiVersion: apps/v1
kind: Deployment
metadata:
name: go-app-fractional-cpu
labels:
app: go-gomaxprocs-test
scenario: fractional-cpu
spec:
replicas: 1
selector:
matchLabels:
app: go-gomaxprocs-test
scenario: fractional-cpu
template:
metadata:
labels:
app: go-gomaxprocs-test
scenario: fractional-cpu
spec:
containers:
- name: go-app
image: golang:1.25
resources:
limits:
cpu: "1.5" # GOMAXPROCS should be 2 (rounds up)
memory: "256Mi"
requests:
cpu: "750m"
memory: "128Mi"
command: ["sh", "-c"]
args:
- |
cat > /tmp/main.go << 'EOF'
package main
import (
"fmt"
"runtime"
"time"
)
func main() {
fmt.Printf("=== Go 1.25 GOMAXPROCS Test (1.5 CPU limit) ===\n")
fmt.Printf("Go version: %s\n", runtime.Version())
fmt.Printf("Host CPUs: %d\n", runtime.NumCPU())
fmt.Printf("GOMAXPROCS: %d\n", runtime.GOMAXPROCS(0))
fmt.Printf("Expected GOMAXPROCS: 2 (1.5 rounds up to 2)\n\n")
for i := 0; i < 6; i++ {
time.Sleep(10 * time.Second)
fmt.Printf("Time %ds: GOMAXPROCS = %d\n", (i+1)*10, runtime.GOMAXPROCS(0))
}
}
EOF
cd /tmp && go run main.go
Docker Compose Test Scenarios
Complete Docker Compose setup for testing different CPU limits:
# docker-compose.yaml
version: '3.8'
services:
# Test 1: 1 CPU limit
go-app-1cpu:
image: golang:1.25
container_name: go-1.25-test-1cpu
deploy:
resources:
limits:
cpus: '1.0'
memory: 256M
reservations:
cpus: '0.5'
memory: 128M
command: >
sh -c "
cat > /tmp/test.go << 'EOF'
package main
import (
\"fmt\"
\"runtime\"
\"time\"
)
func main() {
fmt.Printf(\"=== Docker Test: 1.0 CPU Limit ===\n\")
fmt.Printf(\"Go version: %s\n\", runtime.Version())
fmt.Printf(\"Host CPUs: %d\n\", runtime.NumCPU())
fmt.Printf(\"GOMAXPROCS: %d\n\", runtime.GOMAXPROCS(0))
fmt.Printf(\"Expected: 1\n\n\")
for i := 0; i < 12; i++ {
time.Sleep(5 * time.Second)
fmt.Printf(\"[%02d] GOMAXPROCS = %d\n\", i+1, runtime.GOMAXPROCS(0))
}
}
EOF
cd /tmp && go run test.go
"
# Test 2: 2.5 CPU limit (should round up to 3)
go-app-2-5cpu:
image: golang:1.25
container_name: go-1.25-test-2-5cpu
deploy:
resources:
limits:
cpus: '2.5'
memory: 512M
reservations:
cpus: '1.0'
memory: 256M
command: >
sh -c "
cat > /tmp/test.go << 'EOF'
package main
import (
\"fmt\"
\"runtime\"
\"time\"
)
func main() {
fmt.Printf(\"=== Docker Test: 2.5 CPU Limit ===\n\")
fmt.Printf(\"Go version: %s\n\", runtime.Version())
fmt.Printf(\"Host CPUs: %d\n\", runtime.NumCPU())
fmt.Printf(\"GOMAXPROCS: %d\n\", runtime.GOMAXPROCS(0))
fmt.Printf(\"Expected: 3 (2.5 rounds up)\n\n\")
for i := 0; i < 12; i++ {
time.Sleep(5 * time.Second)
fmt.Printf(\"[%02d] GOMAXPROCS = %d\n\", i+1, runtime.GOMAXPROCS(0))
}
}
EOF
cd /tmp && go run test.go
"
# Test 3: Manual override
go-app-manual:
image: golang:1.25
container_name: go-1.25-test-manual
environment:
- GOMAXPROCS=8
deploy:
resources:
limits:
cpus: '1.0'
memory: 256M
command: >
sh -c "
cat > /tmp/test.go << 'EOF'
package main
import (
\"fmt\"
\"runtime\"
\"time\"
)
func main() {
fmt.Printf(\"=== Docker Test: Manual Override ===\n\")
fmt.Printf(\"Go version: %s\n\", runtime.Version())
fmt.Printf(\"Host CPUs: %d\n\", runtime.NumCPU())
fmt.Printf(\"GOMAXPROCS: %d\n\", runtime.GOMAXPROCS(0))
fmt.Printf(\"CPU Limit: 1.0, GOMAXPROCS env: 8\n\")
fmt.Printf(\"Expected: 8 (manual override)\n\n\")
for i := 0; i < 12; i++ {
time.Sleep(5 * time.Second)
fmt.Printf(\"[%02d] GOMAXPROCS = %d\n\", i+1, runtime.GOMAXPROCS(0))
}
}
EOF
cd /tmp && go run test.go
"
# Test 4: GODEBUG disable
go-app-godebug:
image: golang:1.25
container_name: go-1.25-test-godebug
environment:
- GODEBUG=containermaxprocs=0
deploy:
resources:
limits:
cpus: '1.0'
memory: 256M
command: >
sh -c "
cat > /tmp/test.go << 'EOF'
package main
import (
\"fmt\"
\"runtime\"
\"time\"
)
func main() {
fmt.Printf(\"=== Docker Test: GODEBUG Disabled ===\n\")
fmt.Printf(\"Go version: %s\n\", runtime.Version())
fmt.Printf(\"Host CPUs: %d\n\", runtime.NumCPU())
fmt.Printf(\"GOMAXPROCS: %d\n\", runtime.GOMAXPROCS(0))
fmt.Printf(\"CPU Limit: 1.0, GODEBUG=containermaxprocs=0\n\")
fmt.Printf(\"Expected: %d (same as host CPUs)\n\n\", runtime.NumCPU())
for i := 0; i < 12; i++ {
time.Sleep(5 * time.Second)
fmt.Printf(\"[%02d] GOMAXPROCS = %d\n\", i+1, runtime.GOMAXPROCS(0))
}
}
EOF
cd /tmp && go run test.go
"
Interactive Test Program
An interactive program to test GOMAXPROCS behavior in different scenarios:
// gomaxprocs-test.go
package main
import (
"fmt"
"os"
"runtime"
"strconv"
"time"
)
func main() {
fmt.Println("🚀 Go 1.25 Container-Aware GOMAXPROCS Interactive Test")
fmt.Println("=====================================================")
// Display current environment
showEnvironmentInfo()
// Test scenarios
fmt.Println("\n🧪 Running test scenarios...")
testScenario1_DefaultBehavior()
testScenario2_ManualOverride()
testScenario3_MonitorChanges()
}
func showEnvironmentInfo() {
fmt.Printf("\n📊 Environment Information:\n")
fmt.Printf(" Go Version: %s\n", runtime.Version())
fmt.Printf(" GOOS: %s\n", runtime.GOOS)
fmt.Printf(" GOARCH: %s\n", runtime.GOARCH)
fmt.Printf(" Host CPUs: %d\n", runtime.NumCPU())
fmt.Printf(" Current GOMAXPROCS: %d\n", runtime.GOMAXPROCS(0))
// Check environment variables
if gomaxprocs := os.Getenv("GOMAXPROCS"); gomaxprocs != "" {
fmt.Printf(" GOMAXPROCS env: %s\n", gomaxprocs)
}
if godebug := os.Getenv("GODEBUG"); godebug != "" {
fmt.Printf(" GODEBUG: %s\n", godebug)
}
// Check cgroup information (Linux only)
if runtime.GOOS == "linux" {
checkCgroupInfo()
}
}
func checkCgroupInfo() {
fmt.Printf("\n🐧 Linux cgroup Information:\n")
// Check cgroup v2 CPU limits
if data, err := os.ReadFile("/sys/fs/cgroup/cpu.max"); err == nil {
fmt.Printf(" cpu.max: %s", string(data))
}
// Check cgroup v1 CPU limits
if data, err := os.ReadFile("/sys/fs/cgroup/cpu/cpu.cfs_quota_us"); err == nil {
fmt.Printf(" cfs_quota_us: %s", string(data))
}
if data, err := os.ReadFile("/sys/fs/cgroup/cpu/cpu.cfs_period_us"); err == nil {
fmt.Printf(" cfs_period_us: %s", string(data))
}
}
func testScenario1_DefaultBehavior() {
fmt.Printf("\n🔍 Test 1: Default Behavior\n")
fmt.Printf(" Expected: GOMAXPROCS should respect container CPU limits\n")
original := runtime.GOMAXPROCS(0)
fmt.Printf(" Result: GOMAXPROCS = %d\n", original)
if runtime.GOOS == "linux" {
if original <= runtime.NumCPU() {
fmt.Printf(" ✅ Looks good! GOMAXPROCS ≤ Host CPUs\n")
} else {
fmt.Printf(" ❓ Unexpected: GOMAXPROCS > Host CPUs\n")
}
} else {
fmt.Printf(" ℹ️ Container awareness only works on Linux\n")
}
}
func testScenario2_ManualOverride() {
fmt.Printf("\n🔧 Test 2: Manual Override\n")
fmt.Printf(" Testing manual GOMAXPROCS setting...\n")
original := runtime.GOMAXPROCS(0)
testValue := original + 2
fmt.Printf(" Setting GOMAXPROCS to %d\n", testValue)
runtime.GOMAXPROCS(testValue)
actual := runtime.GOMAXPROCS(0)
fmt.Printf(" Result: GOMAXPROCS = %d\n", actual)
if actual == testValue {
fmt.Printf(" ✅ Manual override works correctly\n")
} else {
fmt.Printf(" ❌ Manual override failed\n")
}
// Restore original
runtime.GOMAXPROCS(original)
fmt.Printf(" Restored to original value: %d\n", original)
}
func testScenario3_MonitorChanges() {
fmt.Printf("\n⏰ Test 3: Monitor for Dynamic Changes\n")
fmt.Printf(" Monitoring GOMAXPROCS for 30 seconds...\n")
fmt.Printf(" (In Go 1.25, this should update if container limits change)\n\n")
for i := 0; i < 6; i++ {
time.Sleep(5 * time.Second)
gomaxprocs := runtime.GOMAXPROCS(0)
fmt.Printf(" [%02ds] GOMAXPROCS = %d\n", (i+1)*5, gomaxprocs)
}
fmt.Printf("\n ℹ️ In a real container environment with changing limits,\n")
fmt.Printf(" you might see GOMAXPROCS change dynamically.\n")
}
Usage Instructions
Running the Kubernetes Tests
# Apply all manifests
kubectl apply -f kubernetes-manifests.yaml
# Check the results
kubectl logs deployment/go-app-cpu-limited-1core
kubectl logs deployment/go-app-cpu-limited-2core
kubectl logs deployment/go-app-manual-override
kubectl logs deployment/go-app-fractional-cpu
# Clean up
kubectl delete -f kubernetes-manifests.yaml
Running the Docker Compose Tests
# Run all test scenarios
docker-compose up
# Run individual tests
docker-compose up go-app-1cpu
docker-compose up go-app-2-5cpu
docker-compose up go-app-manual
docker-compose up go-app-godebug
# Clean up
docker-compose down
Running the Interactive Tests
# Save as gomaxprocs-test.go and run directly
go run gomaxprocs-test.go
# Or in a container with CPU limits
docker run --cpus="2" golang:1.25 sh -c "
cat > test.go << 'EOF'
[paste the gomaxprocs-test.go code here]
EOF
go run test.go
"