Max size of array

I am working on implementing the AES algorithm in CUDA. So far it is working: I read text files, then encrypt and decrypt them. But I have a small problem with memory allocation.
For text files up to one million characters it works, but a file with, for example, 4 million characters does not. I think it is because I can't allocate that much memory on the GPU.
How can I see how big my array is?
I’m using

typedef struct {
	unsigned char item[4][4];
} Block;
Block * plaintext;
cudaMallocHost((void**)&plaintext, num_of_blocks * sizeof(Block));

And when I try to print the size of that array, the result is 8.

fprintf(stdout, "size: %d \n", sizeof(plaintext));

How can I see how many bytes this array takes? Host or device, it doesn't matter. I tried to use malloc_usable_size but I get a warning that it is undefined.

sizeof(plaintext) returns the size of the pointer itself, which is a size_t value, while %d expects an int; use %zu for size_t.

Overall, you need to learn the very basics of C data types, pointers, etc., otherwise you will have problems everywhere.

sizeof(plaintext) returns the size of the pointer itself, which is 8 bytes on a typical 64-bit system; sizeof(*plaintext) would return sizeof(Block) instead.
Also, malloc_usable_size is most probably throwing a warning because you haven't included the malloc.h header. Note that malloc_usable_size reports the size of an allocation on the CPU, not on the GPU; if you pass it a device pointer it will likely crash with a segmentation fault.
To see how much memory is available on the GPU you can use cudaMemGetInfo. It can be used in the following way:

#include <cstdio>
#include <cuda_runtime.h>

int main(){
        size_t fre, tot;
        // query free and total device memory, in bytes
        cudaMemGetInfo(&fre, &tot);
        printf("free: %zu, total: %zu\n", fre, tot);
        return 0;
}

I know the basics and everything, I just did not have time to explain in detail. That fprintf is just the latest thing I tried.

This is what I wanted. Thank you.
My total memory is 2048 MB. When I read a text file with more than one million characters I still have about 1950 MB free. But encryption and decryption produce a new, empty file, and it should not be empty.
Can it be because it can't find one contiguous chunk of memory for those characters?

cudaMallocHost allocates (pinned) host memory, not device memory, and cudaMemGetInfo tells you nothing directly about host memory allocation. The amount of memory you are using on the host for this pinned allocation should be very close to what you asked for:

num_of_blocks * sizeof(Block)

It’s unlikely that you are running into any limits in the range of 1MB or 4MB of pinned memory.

My suggestion would be to use a debugging method to attack the problem rather than try to jump to the conclusion that it is related to memory size, and some imagined limit thereof.

Sorry, my mistake: I copied the wrong line of code. I am using cudaMalloc to allocate memory on the device.

Block *plaintext_dev;
	cudaStatus = cudaMalloc((void**)&plaintext_dev, real_num_of_block * sizeof(Block));
	if (cudaStatus != cudaSuccess) {
		fprintf(stderr, "Allocating memory for plaintext blocks failed!");
		goto Error;
	}
I tried debugging with Nsight, but it would not stop at my breakpoint. I will try again; I just wanted the opinion of others on what it could be.
It would be very strange if it can't find one contiguous chunk of 1, 4, 10, or 20 MB. It works for text files with up to 1 million characters, and I get correct encrypted and decrypted output files, but above 1 million the output files are empty.
Maybe someone has had similar issues; that is why I asked.