I'm using the TensorFlow 2.1 git master branch (commit id: db8a74a737cc735bb2a4800731d21f2de6d04961), compiled locally. Playing around with the C API to call TF_LoadSessionFromSavedModel, I seem to get a segmentation fault. I've managed to narrow the error down to the sample code below: the TF_NewTensor call is crashing and causing a segmentation fault.
#include <stdlib.h>
#include <stdio.h>
#include "tensorflow/c/c_api.h"

int main()
{
    TF_Tensor** InputValues = (TF_Tensor**)malloc(sizeof(TF_Tensor*) * 1);
    int ndims = 1;
    int64_t* dims = malloc(sizeof(int64_t));
    int ndata = sizeof(int32_t);
    int32_t* data = malloc(sizeof(int32_t));
    dims[0] = 1;
    data[0] = 10;
    // Crash on the next line
    TF_Tensor* int_tensor = TF_NewTensor(TF_INT32, dims, ndims, data, ndata, NULL, NULL);
    if (int_tensor == NULL)
    {
        printf("ERROR");
    }
    else
    {
        printf("OK");
    }
    return 0;
}
However, when I move the TF_Tensor** InputValues = (TF_Tensor**)malloc(sizeof(TF_Tensor*)*1); line after the TF_NewTensor call, it doesn't crash, like below:
#include <stdlib.h>
#include <stdio.h>
#include "tensorflow/c/c_api.h"

int main()
{
    int ndims = 1;
    int64_t* dims = malloc(sizeof(int64_t));
    int ndata = sizeof(int32_t);
    int32_t* data = malloc(sizeof(int32_t));
    dims[0] = 1;
    data[0] = 10;
    // No more crash
    TF_Tensor* int_tensor = TF_NewTensor(TF_INT32, dims, ndims, data, ndata, NULL, NULL);
    if (int_tensor == NULL)
    {
        printf("ERROR");
    }
    else
    {
        printf("OK");
    }
    TF_Tensor** InputValues = (TF_Tensor**)malloc(sizeof(TF_Tensor*) * 1);
    return 0;
}
Is this a possible bug, or am I using it wrong? I don't understand how a malloc of an independent variable could cause a segmentation fault. Can anybody reproduce this?
I'm using gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1 20191008 to compile.
UPDATE:
The error can be simplified further, as below. This crashes even without InputValues being allocated.
#include <stdlib.h>
#include <stdio.h>
#include "tensorflow/c/c_api.h"

int main()
{
    int ndims = 1;
    int ndata = 1;
    int64_t dims[] = { 1 };
    int32_t data[] = { 10 };
    TF_Tensor* int_tensor = TF_NewTensor(TF_INT32, dims, ndims, data, ndata, NULL, NULL);
    if (int_tensor == NULL)
    {
        printf("ERROR Tensor");
    }
    else
    {
        printf("OK");
    }
    return 0;
}
Compile with:
gcc -I<tensorflow_path>/include/ -L<tensorflow_path>/lib test.c -ltensorflow -o test2.out
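To run the binary, the TensorFlow shared library also has to be findable at load time; assuming the same <tensorflow_path> placeholder as above, something like:
LD_LIBRARY_PATH=<tensorflow_path>/lib ./test2.out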
Solution
As pointed out by Raz, pass an empty deallocator instead of NULL, and ndata should be the size in bytes, not the number of elements. TF_NewTensor may invoke the deallocator on the buffer you pass in (for example, when it copies data that is not suitably aligned), so a NULL deallocator means calling a null function pointer. That is undefined behavior, which is why a seemingly unrelated malloc could move the crash around.
#include "tensorflow/c/c_api.h"
void NoOpDeallocator(void* data, size_t a, void* b) {}
int main(){
int ndims = 2;
int64_t dims[] = {1,1};
int64_t data[] = {20};
int ndata = sizeof(int64_t); // This is tricky, it number of bytes not number of element
TF_Tensor* int_tensor = TF_NewTensor(TF_INT64, dims, ndims, data, ndata, &NoOpDeallocator, 0);
if (int_tensor != NULL)\
printf("TF_NewTensor is OK\n");
else
printf("ERROR: Failed TF_NewTensor\n");
}
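For reference, here is a minimal sketch of the original int32 example with both fixes applied; TF_TensorData and TF_DeleteTensor (both from c_api.h) are used only to read the value back and clean up:

#include <stdio.h>
#include "tensorflow/c/c_api.h"

void NoOpDeallocator(void* data, size_t len, void* arg) {}

int main()
{
    int ndims = 1;
    int64_t dims[] = { 1 };
    int32_t data[] = { 10 };
    size_t ndata = sizeof(data); // byte size: 1 * sizeof(int32_t)
    TF_Tensor* int_tensor = TF_NewTensor(TF_INT32, dims, ndims, data, ndata, &NoOpDeallocator, NULL);
    if (int_tensor == NULL)
    {
        printf("ERROR Tensor");
        return 1;
    }
    // Read the value back to confirm the tensor holds what was put in.
    printf("OK, value = %d\n", ((int32_t*)TF_TensorData(int_tensor))[0]);
    TF_DeleteTensor(int_tensor);
    return 0;
}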
Check out my GitHub for the full code of running/compiling TensorFlow's C API here.
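For context, a rough sketch of the TF_LoadSessionFromSavedModel call the crash was originally hit in; the "model_dir" path and the "serve" tag are placeholders, not values from the original post:

#include <stdio.h>
#include "tensorflow/c/c_api.h"

int main()
{
    TF_Status* status = TF_NewStatus();
    TF_Graph* graph = TF_NewGraph();
    TF_SessionOptions* opts = TF_NewSessionOptions();
    const char* tags[] = { "serve" }; // tag set used when the SavedModel was exported

    TF_Session* session = TF_LoadSessionFromSavedModel(
        opts, NULL, "model_dir", tags, 1, graph, NULL, status);

    if (TF_GetCode(status) == TF_OK)
        printf("SavedModel loaded\n");
    else
        printf("ERROR: %s\n", TF_Message(status));

    // Clean up in reverse order of creation.
    TF_DeleteSessionOptions(opts);
    if (session != NULL)
    {
        TF_CloseSession(session, status);
        TF_DeleteSession(session, status);
    }
    TF_DeleteGraph(graph);
    TF_DeleteStatus(status);
    return 0;
}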
Comments:
- I don't see InputValues being used anywhere, so how could that be the cause of the crash?
- Does the malloc for InputValues succeed? I don't see you check that. If it failed, then in your first code all the other mallocs could fail too.
- Does ndata denote the size of data? Because you only allocate one int32 to it.
- I don't use InputValues in this example, but in the actual code I use it to call the SessionRun API.
- malloc does return non-null values. I did try with ndata = 1, as Raz Haleva suggested, as well. Still the same segmentation fault.