Midokura Technology Radar
Adopt

Why?

  • Training large models and running high-throughput inference both require dense GPU racks.
  • GPUs remain the dominant commodity hardware for deep learning, with a mature software ecosystem (CUDA, cuDNN, PyTorch, TensorFlow).
  • On-prem and hybrid deployment options are strategic for performance, cost, and data governance.

What?

  • Standardize on dense GPU server designs (A100/H100 or equivalent), rack-level layout, and procurement strategies.
  • Invest in GPU fleet management, node provisioning, and tenancy models (single-tenant vs. multi-slot sharing, e.g. MIG partitioning on A100/H100); a fleet-inventory sketch follows this list.
  • Evaluate cloud vs. on-prem tradeoffs and build repeatable reference architectures for GPU farms; a back-of-the-envelope cost comparison is sketched after this list.
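
Fleet management starts with knowing what each node actually has and how busy it is. The sketch below is a minimal per-node inventory using the NVIDIA Management Library Python bindings (nvidia-ml-py); the report fields are illustrative assumptions, not an agreed Midokura schema.

```python
# Minimal per-node GPU inventory sketch using pynvml (pip install nvidia-ml-py).
# Assumes NVIDIA drivers are installed on the node; field names are illustrative.
import pynvml


def node_gpu_inventory():
    pynvml.nvmlInit()
    try:
        report = []
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)
            report.append({
                "index": i,
                "name": pynvml.nvmlDeviceGetName(handle),
                "memory_used_gib": round(mem.used / 2**30, 1),
                "memory_total_gib": round(mem.total / 2**30, 1),
                "gpu_util_pct": util.gpu,
            })
        return report
    finally:
        pynvml.nvmlShutdown()


if __name__ == "__main__":
    for gpu in node_gpu_inventory():
        print(gpu)
```

Feeding such per-node reports into a central store is one simple way to track utilization per tenant and decide when multi-slot sharing is worthwhile.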
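
For the cloud vs. on-prem question, a simple break-even model per utilized GPU-hour is often enough to frame the discussion. The numbers below are placeholder assumptions to be replaced with real quotes, power, and colocation costs; the formula, not the figures, is the point.

```python
# Back-of-the-envelope cloud vs. on-prem comparison for a GPU fleet.
# All numeric inputs are placeholder assumptions for illustration only.

def on_prem_hourly_cost(capex_per_gpu: float,
                        amortization_years: float,
                        opex_per_gpu_year: float,
                        utilization: float) -> float:
    """Effective cost per *utilized* GPU-hour for an owned fleet."""
    hours_per_year = 365 * 24
    amortized_per_year = capex_per_gpu / amortization_years + opex_per_gpu_year
    return amortized_per_year / (hours_per_year * utilization)


if __name__ == "__main__":
    cloud_rate = 2.50                  # assumed on-demand $/GPU-hour (placeholder)
    owned_rate = on_prem_hourly_cost(
        capex_per_gpu=25_000,          # placeholder purchase price per GPU
        amortization_years=4,          # placeholder depreciation window
        opex_per_gpu_year=2_000,       # placeholder power/cooling/hosting per GPU
        utilization=0.6,               # fraction of hours the GPU is actually busy
    )
    print(f"cloud: ${cloud_rate:.2f}/GPU-h, on-prem: ${owned_rate:.2f}/GPU-h")
```

The utilization parameter usually dominates the outcome, which is why the fleet-utilization data from the inventory sketch above belongs in the same decision.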