
RunPod

Cloud GPU platform used to access high-performance compute for ML training and experimentation without long-term infrastructure commitments.

RunPod is used as an on-demand GPU compute platform to support ML training and experimentation where temporary access to high-performance hardware is required.

In this portfolio, its usage is tied primarily to academic and research work rather than long-running production systems.

Key Capabilities

  • On-Demand GPU Access
    Provides rapid access to a range of modern GPUs for training and inference workloads.

  • Fast Provisioning Model
    Enables quick spin-up of compute resources without complex infrastructure setup (see the first sketch after this list).

  • Cost-Aware Compute
    Supports short-lived, intensive workloads where dedicated cloud GPU clusters would be inefficient.

  • Preconfigured ML Environments
    Offers ready-to-use environments suitable for experimentation and fine-tuning tasks.

  • Flexible Execution Modes
    Supports both instance-based and serverless-style GPU execution (a serverless handler sketch follows below).
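
As a rough illustration of the provisioning model, the sketch below spins up a pod and tears it down again. It assumes the runpod Python SDK; the job name, container image, and GPU type are illustrative placeholders, not details taken from this portfolio.

    # Minimal pod-lifecycle sketch, assuming the `runpod` Python SDK
    # (pip install runpod). All names and values are placeholders.
    import os
    import runpod

    runpod.api_key = os.environ["RUNPOD_API_KEY"]  # assumes the key is exported

    # Spin up a GPU pod from a prebuilt ML image (fast provisioning).
    pod = runpod.create_pod(
        name="finetune-job",                    # hypothetical job name
        image_name="runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04",
        gpu_type_id="NVIDIA GeForce RTX 4090",  # illustrative GPU choice
    )
    print("started pod:", pod["id"])

    # ...run the workload, then release the hardware so billing stops.
    runpod.terminate_pod(pod["id"])

Terminating the pod as soon as the job finishes keeps billing bounded to the job's actual runtime, which is the cost-aware pattern described above.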
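
In the serverless mode, work is instead packaged as a handler function that the platform invokes per request. A minimal sketch, again assuming the runpod SDK's serverless worker interface (the handler body is a stand-in, not a real model call):

    # Minimal serverless worker sketch using runpod.serverless.
    import runpod

    def handler(job):
        # job["input"] carries the JSON payload submitted to the endpoint.
        prompt = job["input"].get("prompt", "")
        # A trivial echo stands in for actual model inference.
        return {"output": prompt.upper()}

    # Registers the handler and starts polling for jobs on the worker.
    runpod.serverless.start({"handler": handler})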

Experience & Research Contribution

Used RunPod during Master’s-level research to support compute-intensive ML workloads that required temporary access to powerful GPUs.

Key contributions included:

  • Running LLM fine-tuning and experimentation workflows (a representative sketch follows this list)
  • Processing and analyzing astronomical data, including Chandra X-ray Observatory datasets (see the FITS example below)
  • Evaluating model behavior and performance under different compute configurations
  • Managing short-lived training jobs in a cost-effective manner
  • Comparing managed GPU platforms against self-managed alternatives for research use
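
As a representative shape for the fine-tuning workflows mentioned above (a hypothetical sketch, not the actual research code), a parameter-efficient LoRA run on a rented GPU might look like this; the model, dataset, and hyperparameters are all placeholders:

    # Hypothetical LoRA fine-tuning sketch using the Hugging Face
    # transformers/peft/datasets stack; every name and value is a placeholder.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    model_name = "gpt2"  # placeholder; any causal LM follows the same pattern
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Train only small low-rank adapters, which suits short GPU rentals.
    model = get_peft_model(model, LoraConfig(
        r=8, lora_alpha=16, lora_dropout=0.05,
        target_modules=["c_attn"],  # attention projection in GPT-2
        task_type="CAUSAL_LM",
    ))

    dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
    tokenized = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
        batched=True, remove_columns=["text"],
    )

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", per_device_train_batch_size=4,
                               num_train_epochs=1, logging_steps=50),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    ).train()
    model.save_pretrained("out/lora-adapter")  # adapters stay small and portable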
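
Chandra observations are distributed as FITS event lists, so the analysis side reduces to standard tooling. A minimal reading sketch, assuming astropy is available (the filename is a hypothetical placeholder):

    # Minimal sketch of inspecting a Chandra level-2 event file with astropy.
    # The filename is a hypothetical placeholder.
    from astropy.io import fits

    with fits.open("acisf01234N001_evt2.fits") as hdul:
        events = hdul["EVENTS"].data       # per-photon event table
        print(events.columns.names)        # e.g. time, x, y, energy
        energy = events["energy"]          # photon energy in eV
        print("events:", len(energy), "mean energy (eV):", float(energy.mean()))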

RunPod served as a practical research enabler, providing flexible access to GPU resources without the operational overhead of maintaining persistent ML infrastructure.