import torch, gc

# Free unused GPU memory: collect Python garbage first so dead tensors are
# released, then return cached blocks from PyTorch's allocator to the driver.
gc.collect()
torch.cuda.empty_cache()
# Run inference without building the autograd graph (saves memory):
with torch.no_grad():
    ...  # forward pass here
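A minimal sketch of the two tips above combined; the tensors and the `free_gpu_cache` helper name are placeholders for illustration:

```python
import gc
import torch

def free_gpu_cache():
    # Collect unreachable Python objects first so their tensors are freed,
    # then release cached blocks back to the CUDA driver (no-op on CPU).
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()

# Inside no_grad(), no autograd graph is recorded for these operations.
with torch.no_grad():
    x = torch.ones(3)
    y = x * 2  # y.requires_grad is False

free_gpu_cache()
```

Note that `empty_cache()` only returns memory that PyTorch has already cached but is no longer using; it does not free tensors that are still referenced.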
# DataLoader options for GPU training; on a CPU-only machine just drop them.
# pin_memory can speed up host-to-GPU copies but uses more (page-locked) host memory.
kwargs = {'num_workers': 6, 'pin_memory': True} if torch.cuda.is_available() else {}

# Limit block splitting in the CUDA caching allocator to reduce fragmentation
# (shell command, set before launching Python):
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:32
torch.from_numpy(ndarray)  # create a tensor that shares memory with a NumPy array
Tensor.item()  # convert a one-element (scalar) tensor to a plain Python number
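The two conversions above can be demonstrated together; the array values here are arbitrary:

```python
import numpy as np
import torch

# torch.from_numpy shares storage with the source array: no copy is made,
# so an in-place change to the array is visible through the tensor.
arr = np.array([1.0, 2.0, 3.0])
t = torch.from_numpy(arr)
arr[0] = 10.0  # t[0] now also reads 10.0

# .item() extracts the value of a one-element tensor as a Python scalar.
total = torch.tensor([1, 2, 3]).sum()
n = total.item()  # 6, a plain Python int
```

Because of the shared storage, use `torch.tensor(arr)` or `t.clone()` instead when you need an independent copy.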