Event Type: Webinar

  • We optimized 50,000 Kubernetes pods: Our top 10 lessons learned

    Learn from our three-year journey optimizing thousands of Kubernetes pods across multiple enterprise environments.

    In this Ignite episode, Stefano Doni and Mauro Pessina share the key lessons they discovered while maximizing application reliability and reducing cluster resource costs, challenging many common assumptions about Kubernetes optimization.

    Watch the recording to learn:

    • Why traditional resource optimization approaches often fall short
    • How application-aware optimization leads to better results
    • Real-world examples of significant cost savings and reliability improvements
    • Practical techniques you can implement in your environment

    Whether you’re managing dozens or thousands of pods, you’ll leave with actionable insights to improve your Kubernetes operations.

  • The performance engineer’s secret co-pilot: Embracing AI-powered tuning at Sabre

    Explore the integration of machine learning and automation in performance engineering.

    In this webinar, Pawel Popiolek of Sabre and Stefano Doni, CTO of Akamas, shared Sabre’s journey, highlighting how Akamas was introduced to tackle performance optimization challenges and covering practical use cases, including K8s and JVM tuning that led to a 50% cost reduction.

    Pawel and Stefano discussed:

    • The benefits of AI for performance engineering
    • How and why to use Akamas
    • How AI can help performance engineers work better and faster

    Watch the recording for a comprehensive session on AI-enhanced performance optimization.

  • The dark side of Kubernetes? Light it up with application-aware optimization

    There is a “dark side” to Kubernetes that makes it difficult to ensure the desired performance and resilience of cloud-native applications, while also keeping their costs under control.

    Indeed, the combined effect of Kubernetes resource management mechanisms and application runtime heuristics may cause serious performance and resilience risks.

    • How do you best align the sizing of a container with the resource requirements of the applications running in it?
    • How do you avoid CPU throttling from impacting response times?
    • What is the right heap size and Garbage Collection type to minimize cost and avoid out-of-memory (OOM) issues?

    On the other hand, there are also significant potential improvements, both in terms of performance and efficiency, that can be achieved by properly tuning Kubernetes and application runtime (e.g. JVM, Golang) configuration settings.
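
    To make this interplay concrete, here is a minimal Python sketch (an illustration only, not code from the webinar or from Akamas) of two of the mechanisms involved: container-aware JVMs typically default the maximum heap to a percentage of the container memory limit, and CPU limits enforced through CFS quotas throttle containers whose usage reaches the limit. All numbers are assumptions.

```python
# Illustrative sketch only: how container limits and runtime heuristics interact.

def default_jvm_heap_mb(container_mem_limit_mb: float,
                        max_ram_percentage: float = 25.0) -> float:
    """Container-aware JVMs typically size the default max heap as a
    percentage of the container memory limit (25% unless overridden,
    e.g. with -XX:MaxRAMPercentage)."""
    return container_mem_limit_mb * max_ram_percentage / 100.0

def likely_throttled(cpu_limit_cores: float, peak_cpu_cores: float) -> bool:
    """With CFS quotas, a container whose CPU usage reaches its limit is
    throttled, which typically shows up as response-time spikes."""
    return peak_cpu_cores >= cpu_limit_cores

# A 4 GiB memory limit leaves the JVM a ~1 GiB default heap: memory is paid
# for but not used by the heap, while GC pressure or OOM kills may still occur.
print(default_jvm_heap_mb(4096))     # 1024.0
# A 1-core CPU limit with 1.2-core peaks means throttling and higher latency.
print(likely_throttled(1.0, 1.2))    # True
```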

    In this webinar, we illustrate how the Akamas AI-powered optimization platform addresses these challenges by making it possible to set optimization goals (e.g. cost reduction) and constraints (e.g. performance SLOs) and to get recommendations on how to adjust configuration settings dynamically under varying workloads.
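
    As a rough illustration of the goal-and-constraints idea (a hypothetical sketch, not the actual Akamas study format), the snippet below picks the cheapest candidate configuration among those that satisfy a latency and error-rate SLO; the prices, SLO thresholds and candidate values are assumptions.

```python
# Hypothetical sketch: minimize cost subject to SLO constraints.

def cost(cpu_request_cores: float, mem_request_gib: float, replicas: int,
         cpu_price: float = 0.03, mem_price: float = 0.004) -> float:
    """Hourly cost of a deployment derived from its resource requests (assumed prices)."""
    return replicas * (cpu_request_cores * cpu_price + mem_request_gib * mem_price)

def admissible(p99_latency_ms: float, error_rate: float,
               slo_latency_ms: float = 500, slo_errors: float = 0.01) -> bool:
    """Constraint: a configuration is acceptable only if it meets the SLOs."""
    return p99_latency_ms <= slo_latency_ms and error_rate <= slo_errors

# Goal: among admissible configurations, recommend the one with minimum cost.
candidates = [
    {"cpu": 2.0, "mem": 4.0, "replicas": 6, "p99": 320, "err": 0.002},
    {"cpu": 1.0, "mem": 2.0, "replicas": 6, "p99": 480, "err": 0.004},
    {"cpu": 0.5, "mem": 1.0, "replicas": 6, "p99": 730, "err": 0.020},  # violates SLO
]
best = min((c for c in candidates if admissible(c["p99"], c["err"])),
           key=lambda c: cost(c["cpu"], c["mem"], c["replicas"]))
print(best)
```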

  • Continuous optimization with AI

    On May 18 we joined our partners Performetriks and Mediro ICT for a new edition of the Core Banking Performance Optimization online event.

    During the event, we presented our AI-powered optimization approach and platform, showing how Akamas can automatically identify the best configurations for the multitude of parameters in today’s IT stacks, while also supporting developers, performance engineers and SRE teams in evaluating the options it recommends to deliver maximum service performance and resilience at minimum cost.

    Watch the video to learn more about the benefits of Akamas for delivering high-quality, cost-effective services.

  • Moviri speech at KubeCon + CloudNativeCon Europe 2022

    Akamas strategic partner Moviri delivered a presentation on “Getting the optimal service efficiency that autoscalers won’t give you” at the KubeCon + CloudNativeCon Europe 2022 conference.

    Mauro Pessina, Manager of the Performance Engineering Business Line at Moviri, shared the results of extensive tuning activities performed on a Kubernetes microservices application to minimize cloud cost without compromising its performance.

    To learn more about Kubernetes optimization with Akamas, visit the Akamas Kubernetes Optimization solution page and the Akamas resource center.

  • Conf42 Cloud Native: Cheap or fast? How we got both by leveraging ML to automatically tune K8s apps

    One of the top benefits of Kubernetes is efficiency. Nevertheless, several companies adopting Kubernetes may experience high costs and performance issues. The challenge of manually tuning Kubernetes applications is well known to Performance Engineers and SREs.

    During his session at Conf42 Cloud Native 2022, Giovanni Gibilisco (Head of Engineering at Akamas) talks about this challenge and shows how to overcome it with a new approach that leverages ML techniques.

    In the first part of his speech, Giovanni explains some lesser-known facts about Kubernetes resource management and auto-scaling mechanisms. He then demonstrates how to get a Kubernetes microservices application automatically tuned for both pod and runtime configurations. The real-world case presented refers to an organization whose optimization goal was to both minimize Kubernetes cost and maximize application throughput, while also matching their SLOs.
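
    To hint at why manually tuning both layers quickly becomes intractable, here is a small, hypothetical enumeration of a joint pod-plus-JVM configuration space (parameter names and ranges are assumptions, not those used in the talk):

```python
# Hypothetical joint pod + runtime configuration space.
from itertools import product

cpu_limits = [0.5, 1.0, 2.0]          # cores
mem_limits = [512, 1024, 2048]        # MiB
heap_sizes = [256, 512, 1024]         # -Xmx, MiB
gc_types   = ["G1GC", "ParallelGC"]   # garbage collector choice

# Even this tiny grid already yields dozens of combinations; real stacks
# expose many more parameters, which is why manual tuning does not scale.
space = [
    {"cpu": c, "mem": m, "heap": h, "gc": g}
    for c, m, h, g in product(cpu_limits, mem_limits, heap_sizes, gc_types)
    if h < m  # the heap must fit inside the container memory limit
]
print(len(space), "candidate configurations from just 4 parameters")
```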

  • How to leverage ML-based optimization to balance Kubernetes performance, resilience and cost-efficiency

    Stefano Doni (Akamas CTO) was one of the guest experts for the CMG virtual event on “Cloud’s top trends for 2022”.

    During his speech, Stefano describes how to address the challenge of ensuring that cloud-native applications get optimized in terms of service quality, resilience and cost-efficiency.

    Stefano illustrates how ML-powered optimization makes it possible to automatically tune Kubernetes microservices applications without spending hours on manual tuning. A real-world case is discussed in which the cloud bill was significantly reduced while better performance and resilience were achieved, simply by letting ML automatically tune Kubernetes pod and application runtime configurations once a cost-reduction goal and constraints reflecting the SLOs in place were stated.

  • Using ML to automatically optimize Kubernetes for cost efficiency & reliability

    Stefano Doni (CTO at Akamas) presented his talk at the Cloud Technology in the North Meetup hosted by IBM Research on March 14th.

    During his speech, Stefano covers key Kubernetes resource management concepts and demonstrates how machine learning techniques can enable developers and SREs to automatically identify the size of pod resources that both minimizes infrastructure cost and improves application performance and reliability.
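
    As a simplified stand-in for the ML techniques discussed (not the actual Akamas algorithm), the sketch below runs a plain random search over pod resource requests and keeps the cheapest configuration that meets a latency target; the load-test function and prices are hypothetical placeholders.

```python
# Simplified stand-in for ML-driven pod sizing: random search over requests.
import random

def run_load_test(cpu_request: float, mem_request_gib: float) -> float:
    """Placeholder: pretend smaller pods respond more slowly (assumed model).
    A real setup would apply the config and measure p95 latency under load."""
    return 200 / cpu_request + 50 / mem_request_gib

def monthly_cost(cpu_request: float, mem_request_gib: float,
                 cpu_price: float = 21.0, mem_price: float = 3.0) -> float:
    """Monthly cost per pod from its requests (assumed unit prices)."""
    return cpu_request * cpu_price + mem_request_gib * mem_price

best = None
random.seed(1)
for _ in range(50):
    cpu = round(random.uniform(0.25, 4.0), 2)
    mem = round(random.uniform(0.5, 8.0), 1)
    p95 = run_load_test(cpu, mem)
    if p95 <= 400:                       # latency target in ms
        cost = monthly_cost(cpu, mem)
        if best is None or cost < best[0]:
            best = (cost, cpu, mem, p95)

print("cheapest config meeting the target:", best)
```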

  • Performance Engineering – a bright future?

    During our first Ignite by Akamas, Scott Moore (Head of Customer Engineering at Tricentis) and Stefano Doni (CTO at Akamas) dug deep into the challenges faced by every performance engineering expert.

    Watch the video to hear how our experts addressed the many interesting questions received before and during the webinar, covering how Performance Engineering is changing, performance testing tools and methodologies, and the optimization of Kubernetes and microservices applications.

  • Pimp my Kubernetes clusters – make it cheap and cool

    During our “Ignite by Akamas” webinar, Henrik Rexed (Cloud Native Advocate at Dynatrace) and Stefano Doni (CTO at Akamas) talked about the benefits of Kubernetes and the challenges when optimizing it.

    Watch the video to learn how to successfully overcome these challenges by using machine learning to optimize Kubernetes applications, balancing reliability, performance, and cost. The video also discusses the benefits of leveraging the bi-directional integration between Akamas and Dynatrace for full-stack observability and optimization.

  • How AI optimization debunks 4 long-standing Java tuning myths

    This presentation by Stefano Doni (Akamas CTO) at Performance Summit 2021 in London illustrates how AI-based optimization provides a more effective approach to Java tuning, one that also helps debunk some common JVM tuning myths rooted in established industry approaches and beliefs.

    Key questions and debunked myths:

    1. Garbage Collector tuning: do the industry guidelines and metrics always lead to better application performance?
    2. Tradeoff among latency, throughput and footprint: is there a way to improve all three at the same time?
    3. JVM tuning or not: is it worth tuning the JVM, or is simply using the latest and greatest JVM enough to ensure better performance?
    4. JVM tuning once and for all: is it really a good practice to copy & paste known “good” JVM configurations across different applications?

  • Automating performance tuning with machine learning

    Efficiency is one of Kubernetes’ top benefits, yet companies adopting Kubernetes often experience high infrastructure costs and performance issues, with applications failing to match latency SLOs. Even for experienced Performance Engineers and SREs, sizing of resource requests and limits to ensure application SLOs can be a real challenge due to the complexity of Kubernetes resource management mechanisms.

    In this talk at USENIX SREcon on October 14th, 2021, Stefano Doni (Akamas CTO) describes how AI techniques help optimize Kubernetes and match SLOs. In the first part of the presentation, Stefano covers Kubernetes resource management concepts from a performance and reliability perspective. In the second part of the talk, the AI techniques are applied to a real-world cloud-native application to achieve the right balance between low cost and optimal application performance and reliability.
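
    One of those resource management concepts, sketched below in simplified form (the rules are abbreviated; the real Guaranteed class also requires CPU and memory limits on every container), is how requests and limits map to a pod’s QoS class, which in turn influences eviction behaviour and therefore reliability.

```python
# Simplified sketch of Kubernetes QoS classes; values are illustrative.

def qos_class(requests: dict, limits: dict) -> str:
    if requests and limits and requests == limits:
        return "Guaranteed"   # typically evicted last under node pressure
    if requests or limits:
        return "Burstable"
    return "BestEffort"       # evicted first when the node runs out of resources

print(qos_class({"cpu": "500m", "memory": "1Gi"},
                {"cpu": "500m", "memory": "1Gi"}))   # Guaranteed
print(qos_class({"cpu": "250m"}, {"cpu": "1"}))      # Burstable
print(qos_class({}, {}))                             # BestEffort
```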