Akamas Introduces HPA-Aware Optimization and Expands Its Autonomous Kubernetes Capabilities at KubeCon EMEA 2026

Milan, Italy – April 2026

Akamas, the autonomous and continuous optimization company, today announced major enhancements to its Kubernetes optimization platform at KubeCon + CloudNativeCon EMEA 2026 in Amsterdam, introducing native support for Horizontal Pod Autoscaler (HPA) workloads alongside expanded capabilities for cluster autoscaling, Node.js runtimes, and GitOps integration.

With this release, Akamas extends its full-stack optimization approach to the most dynamic and business-critical workloads in modern cloud-native environments.

HPA-Aware Optimization: Scaling from the Right Foundation

Horizontal Pod Autoscaler (HPA) and KEDA are widely adopted mechanisms for scaling Kubernetes workloads. They are powerful tools, but autoscaling alone does not guarantee efficiency.

In practice, many HPA-managed services scale from poorly configured resource baselines. When CPU and memory requests are misaligned with real workload behavior, scaling can amplify inefficiencies instead of solving them. Teams often experience over-provisioned replicas, slow scale-ups caused by CPU throttling, or replica limits that cannot absorb peak traffic. In more subtle cases, frequent scaling combined with complex runtimes leads to instability and performance degradation.

Akamas now brings native optimization to workloads governed by HPA. Rather than modifying scaling thresholds or policies, the platform optimizes the baseline configuration from which scaling occurs. It generates right-sized CPU and memory requests and limits, aligns runtime configurations – including JVM and Node.js – with scaling dynamics, and ensures recommendations do not conflict with existing HPA behavior.

By optimizing the foundation instead of altering scaling rules, Akamas ensures safe and predictable rollouts. HPA continues to operate as intended, but from a more efficient and stable starting point.
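As an illustration of the idea, consider how HPA's CPU utilization target is computed against the pod's CPU request: if the request is wrong, every scaling decision inherits that error. The sketch below shows a right-sized baseline for a hypothetical HPA-managed service; the names and values (`checkout-service`, `500m`, `768Mi`, the 70% target) are illustrative assumptions, not Akamas output.

```yaml
# Illustrative baseline for an HPA-managed workload (hypothetical values).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-service
spec:
  template:
    spec:
      containers:
        - name: app
          resources:
            requests:
              cpu: "500m"       # sized from observed per-pod demand, not defaults
              memory: "768Mi"
            limits:
              memory: "768Mi"   # memory limit equal to request avoids OOM surprises
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: checkout-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout-service
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # utilization is measured against the CPU request above
```

Because the 70% utilization target is a percentage of the CPU request, correcting the request changes when and how aggressively HPA scales, without touching the scaling policy itself.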

The platform also introduces detection logic specific to autoscaled workloads. It can surface scaling thresholds that are set too high, identify insufficient max replica limits, detect slow scale-ups caused by throttling, and score overall scaling efficiency across HPA-managed services. Runtime-specific overheads, such as JVM warm-up and JIT compilation, are incorporated into the recommendation engine to prevent cold-start penalties and favor stable, high-performance pod configurations over aggressive but inefficient scaling patterns.

“Autoscaling determines when to scale,” said Stefano Doni, CTO of Akamas. “Akamas determines what each pod should look like before scaling begins. That distinction is what turns amplified inefficiency into efficient scaling.”

Extending Optimization Across the Kubernetes Stack

Alongside HPA integration, Akamas continues to expand its Kubernetes coverage.

At the infrastructure layer, enhanced support for Cluster Autoscaler optimization allows platform teams to analyze real workload behavior and identify unbalanced scaling or stranded capacity at the node level. By recommending optimal node instance types and quantifying cost impact before changes are applied, Akamas helps organizations improve cluster efficiency without compromising reliability.

At the runtime layer, Akamas is extending its optimization intelligence to Node.js applications running on Kubernetes. Many Node.js services operate with default memory and runtime configurations that were never designed for containerized, resource-constrained environments. Akamas now provides data-driven recommendations that align runtime behavior with Kubernetes resource limits, reducing performance variability and reliability risks in modern JavaScript workloads.
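One common example of the misalignment described above is the V8 heap: by default, Node.js does not size its old-generation heap from the container's memory limit, so a pod can be OOM-killed before the runtime ever triggers garbage-collection pressure. A minimal sketch of aligning the two, with hypothetical names and values (`node-api`, `512Mi`, a 384 MB heap cap chosen to leave headroom for non-heap memory):

```yaml
# Illustrative alignment of the V8 heap with the container memory limit.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-api            # hypothetical service name
spec:
  template:
    spec:
      containers:
        - name: app
          image: node:20-alpine
          env:
            - name: NODE_OPTIONS
              value: "--max-old-space-size=384"  # cap the V8 heap (in MB) below the limit
          resources:
            requests:
              memory: "512Mi"
            limits:
              memory: "512Mi"  # headroom above the heap cap covers buffers and native memory
```

The exact heap-to-limit ratio is workload-dependent; the point is that the runtime flag and the Kubernetes limit are set together rather than independently.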

To operationalize optimization safely, Akamas also introduces deeper GitOps integration. Recommendations can be delivered as reviewable merge requests, allowing teams to apply configuration changes through their existing CI/CD pipelines with full traceability and governance. This embeds optimization into standard engineering workflows rather than treating it as a separate, manual initiative.

Toward Autonomous, Full-Stack Kubernetes Optimization

With HPA-aware optimization, cluster autoscaler support, runtime intelligence, and Git-based workflows, Akamas strengthens its position as a full-stack Kubernetes optimization platform.

Instead of treating performance tuning, scaling efficiency, and cost control as isolated activities, Akamas enables organizations to optimize across application runtimes, pod configuration, and infrastructure layers within a unified, policy-driven framework.

Akamas is demonstrating these expanded capabilities live at KubeCon EMEA 2026 in Amsterdam.

About Akamas

Akamas is the autonomous and continuous optimization company.
Built on patented, research-driven AI technology, Akamas enables organizations to optimize reliability, performance, and cost across the full software stack – from application runtimes to Kubernetes infrastructure.

By transforming configuration into a data-driven, policy-driven platform capability, Akamas helps engineering teams reduce waste, prevent incidents, and move toward autonomous optimization at scale.