Return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined] Killed

/home/ljx/.local/lib/python3.6/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /media/nvidia/NVME/pytorch/pytorch-v1.10.0/aten/src/ATen/native/TensorShape.cpp:2157.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Killed
Using JetPack 4.6.1
Jetson Nano 2GB

Hi,

Usually, ‘killed’ is caused by running out of memory.
Could you try to monitor the system with tegrastats to see if the memory is enough for your use case?

$ sudo tegrastats
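
If it is easier, a similar check can also be done from inside the Python script itself; the sketch below just reads MemAvailable from /proc/meminfo, which is plain Linux and not Jetson-specific:

def log_available_memory(tag=""):
    # Print MemAvailable from /proc/meminfo in MB (standard Linux interface).
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                kb = int(line.split()[1])
                print(f"[{tag}] MemAvailable: {kb // 1024} MB")
                return

log_available_memory("before building the model")
# ... build COCODemo / run inference here ...
log_available_memory("after building the model")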

Thanks.

Is there a way to add swap/shared memory? I tried adding swap, but it doesn't work for me.


The script runs normally up to this point:

import matplotlib
matplotlib.use('TkAgg')

import matplotlib.pyplot as plt
import matplotlib.pylab as pylab

import requests
from io import BytesIO
from PIL import Image
import numpy as np

# this makes our figures bigger

pylab.rcParams['figure.figsize'] = 20, 12

from maskrcnn_benchmark.config import cfg
from predictor import COCODemo

config_file = "../configs/caffe2/e2e_mask_rcnn_R_50_FPN_1x_caffe2.yaml"

# update the config options with the config file
cfg.merge_from_file(config_file)

# manual override some options
# cpu or cuda
cfg.merge_from_list(["MODEL.DEVICE", "cuda"])

coco_demo = COCODemo(
    cfg,
    min_image_size=800,
    confidence_threshold=0.7,
)

def load(url):
    """
    Given an url of an image, downloads the image and
    returns a PIL image
    """
    response = requests.get(url)
    pil_image = Image.open(BytesIO(response.content)).convert("RGB")
    # convert to BGR format
    image = np.array(pil_image)[:, :, [2, 1, 0]]
    return image

def imshow(img):
    plt.imshow(img[:, :, [2, 1, 0]])
    plt.axis("off")

# from COCO - Common Objects in Context
image = load("https://farm3.staticflickr.com/2469/3915380994_2e611b1779_z.jpg")
image2 = image[:, :, [2, 1, 0]]
plt.imshow(image2)
plt.show()

# compute predictions
predictions = coco_demo.run_on_opencv_image(image)
imshow(predictions)
plt.show()

Hello, can this be solved?

Continuing the discussion from Return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined] Killed:

>>> import torch
>>> x = torch.tensor([1, 2, 3])
>>> y = torch.tensor([4, 5, 6, 7, 8])
>>> x
tensor([1, 2, 3])
>>> y
tensor([4, 5, 6, 7, 8])
>>> grid_x, grid_y = torch.meshgrid(x, y)
/home/ljx/.local/lib/python3.6/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /media/nvidia/NVME/pytorch/pytorch-v1.10.0/aten/src/ATen/native/TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
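
For what it is worth, the UserWarning above is only a deprecation notice and is separate from the "Killed" message. On PyTorch 1.10 it can be silenced by passing the indexing argument explicitly; 'ij' matches the current default behaviour:

import torch

x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5, 6, 7, 8])

# Passing indexing explicitly avoids the deprecation warning;
# 'ij' reproduces torch.meshgrid's existing default behaviour.
grid_x, grid_y = torch.meshgrid(x, y, indexing='ij')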

Hi,

Based on the tegrastats output, you are running out of memory.
Unfortunately, GPU accessible memory is fixed and cannot be increased with the swap approach.

Since the Nano 2GB's resources are quite limited, you might need to switch to a more lightweight model.
Or you can get a device with more memory to solve this issue directly.

Thanks.
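
One low-effort thing that may be worth trying before switching models (no guarantee it is enough on 2 GB) is lowering the input resolution in the demo script above, since min_image_size already appears in the COCODemo call. A minimal sketch reusing the script's own values, with 480 as an arbitrary example size:

from maskrcnn_benchmark.config import cfg
from predictor import COCODemo

config_file = "../configs/caffe2/e2e_mask_rcnn_R_50_FPN_1x_caffe2.yaml"
cfg.merge_from_file(config_file)
cfg.merge_from_list(["MODEL.DEVICE", "cuda"])

# Smaller input resolution -> smaller feature maps -> lower peak memory.
# 480 is an arbitrary example; trade it off against detection quality.
coco_demo = COCODemo(
    cfg,
    min_image_size=480,
    confidence_threshold=0.7,
)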

Thank you very much. If there is a lightweight model, I am willing to try again; if you have a recommendation, please share it, haha.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.