Built with developer experience in mind, Tensorkube simplifies deploying serverless GPU apps. For advanced debugging, you may sometimes need the underlying details of a deployment. This cheatsheet equips you with the commands to inspect your deployments.
Deploy command with all parameters
tensorkube deploy --gpus 1 --gpu-type a10g --cpu 2500 --memory 12000 --min-scale 1 --max-scale 10 --env staging --secret aws-secret --secret huggingface-secret
--cpu is in millicores (1000 = 1 vCPU)
--memory is in MB (1024 = 1 GB)
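The unit conversions can be scripted when building the deploy command. A minimal sketch; the resource sizes below (2 vCPUs, 8 GB) are hypothetical values, not defaults:

```shell
# Request 2 vCPUs and 8 GB of memory (hypothetical sizes).
# --cpu is in millicores: multiply vCPUs by 1000.
# --memory is in MB: multiply GB by 1024.
CPU_MILLICORES=$((2 * 1000))   # 2000
MEMORY_MB=$((8 * 1024))        # 8192
echo "tensorkube deploy --gpus 1 --gpu-type a10g --cpu ${CPU_MILLICORES} --memory ${MEMORY_MB}"
```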
List all deployments
Use tensorkube list deployments --all to get the details of your deployed apps, including their names, status, and HTTP endpoints.
Use tensorkube list deployments --env <env_name> to list only the apps in a specific environment.
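You can script simple checks against the command's output. A minimal sketch, assuming a hypothetical deployment named "my-app" and an assumed column layout (name, status, endpoint) — check the real output format before relying on it:

```shell
# In practice, capture real output:
#   LIST_OUTPUT="$(tensorkube list deployments --all)"
# A stand-in line is used here; the column layout is an assumption.
LIST_OUTPUT="my-app   Running   https://my-app.example.com"

# Check that the hypothetical deployment "my-app" is in the Running state.
if echo "$LIST_OUTPUT" | grep -q "^my-app .*Running"; then
  echo "my-app is running"
fi
```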
Logs of a pod
Use tensorkube deployment logs <deployment-name> --env <env-name> to get the logs of a deployment's pod and debug any issues present in the pod.
SSH into a deployment / pod
Use tensorkube deployment ssh <deployment-name> --env <env-name> to SSH into a pod and debug issues interactively.
Get details of a deployment
tensorkube deployment describe <deployment-name> --env <env-name>
List all running pods
Use kubectl get pods -n <env-name> to get the details of all running pods across your deployments in an environment.
Status of pod before being created (only use when necessary)
Use kubectl describe pod <pod-name> -n <env-name> to inspect the status of a pod while it is being created. This shows the steps Tensorkube takes to create the pod, how long each step takes, and whether there is any scope for optimisation.
