[Build] Can not build 1.17.1 for NodeJS #20082
Comments
There are some code changes and we are including them in 1.17.3. This patch release will be out soon.
Any news on this? I have managed to build the current master from source (onnxruntime-node), but via npm we are still at 1.17.0.
Now that 1.17.3 has finally been released, onnxruntime-web is on 1.17.3, but onnxruntime-node is still on 1.17.0...
The package tarball is generated successfully; however, its size is too large and it is rejected by the NPM registry. I made this change to the main branch to fix the issue, and I also created a PR for the same change on the branch.
This sounds great! But does it mean that we will have to wait for the next release for it to be actually usable?
I am still trying to get the change into 1.17.3... UPDATE: onnxruntime-node@1.17.3 is published. @nemphys
Just tried it and I unfortunately get this error:
Apparently the CUDA v12 archive URL is not valid.
It seems that the correct filename is onnxruntime-linux-x64-gpu-cuda12-1.17.3.tgz (the "-gpu" part is missing).
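For illustration, the filename pattern described above can be sketched as a small helper. `buildArchiveName` is a hypothetical function (not part of onnxruntime-node), and the pattern is taken only from the filename quoted in this comment:

```javascript
// Hypothetical helper sketching the release-archive naming pattern quoted
// above; not an actual onnxruntime-node API.
function buildArchiveName(platform, arch, cudaMajor, version) {
  // The "-gpu" segment must precede the CUDA suffix; dropping it yields the
  // invalid URL reported in this thread.
  const gpuPart = cudaMajor ? `-gpu-cuda${cudaMajor}` : "";
  return `onnxruntime-${platform}-${arch}${gpuPart}-${version}.tgz`;
}

console.log(buildArchiveName("linux", "x64", 12, "1.17.3"));
// → onnxruntime-linux-x64-gpu-cuda12-1.17.3.tgz
```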
Cannot create a session:

```js
const session = await onnxruntime.InferenceSession.create(
  "mnist-12.onnx",
  {
    executionProviders: [
      { name: "tensorrt" },
      { name: "cuda" },
      { name: "cpu" }
    ]
  }
);
```
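When one execution provider's shared library is broken (as with the empty `libonnxruntime_providers_shared.so` reported below), session creation throws rather than silently falling back. One workaround is to try provider preference lists in order. This is a sketch, not onnxruntime-node's own behavior; the `createSession` parameter stands in for `ort.InferenceSession.create` so the fallback logic is shown on its own:

```javascript
// Sketch: try execution-provider preference lists in order and fall back when
// session creation fails (e.g. a missing or empty providers shared library).
// createSession is injected; in real use pass ort.InferenceSession.create.
async function createWithFallback(createSession, modelPath, providerLists) {
  let lastError;
  for (const executionProviders of providerLists) {
    try {
      // Each attempt uses one preference list, e.g. ["tensorrt"], then ["cuda"].
      return await createSession(modelPath, { executionProviders });
    } catch (err) {
      lastError = err; // remember the failure and try the next list
    }
  }
  throw lastError; // nothing worked, surface the last error
}
```

Usage would then look like `createWithFallback((m, o) => ort.InferenceSession.create(m, o), "mnist-12.onnx", [["tensorrt"], ["cuda"], ["cpu"]])`.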
I just tried the new version on a Linux x64 machine and (without any CUDA-related parameter) the installation succeeds, probably downloading the default CUDA 11 binaries (the specific server has both CUDA 11 and 12 installed; no idea why 11 is selected). The problem is that the file libonnxruntime_providers_shared.so has a size of 0 bytes (empty), so the error @pbk20191 mentions above arises at runtime.

Apart from that, I think that having to specify a runtime parameter during npm install in order to set the desired CUDA version (or skip it) is suboptimal. It would be much better to be able to define this preference inside the parent application's package.json, so that a plain npm install would perform the desired operation without any extra/custom parameters. Our app is automatically deployed to various servers (of different architectures) using Ansible followed by a call to npm install, so defining runtime parameters is not something we would like to include in our deployment flow.
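For what it's worth, stock npm already offers a middle ground: settings from a project-level `.npmrc` (or `--flag=value` arguments to `npm install`) are exposed to lifecycle scripts as `npm_config_*` environment variables, so an install script can read a per-project preference without extra command-line parameters at deploy time. A minimal sketch; the `onnxruntime_node_install_cuda` key is hypothetical, not an option the package documents:

```javascript
// Hypothetical install-time preference lookup. npm surfaces a project .npmrc
// entry such as `onnxruntime_node_install_cuda=skip` (or the CLI flag
// `npm install --onnxruntime_node_install_cuda=skip`) to lifecycle scripts as
// an npm_config_* environment variable; the key name here is illustrative.
function cudaInstallPreference(env = process.env) {
  const raw = env.npm_config_onnxruntime_node_install_cuda;
  if (raw === undefined || raw === "") return "auto"; // no preference set
  const value = String(raw).toLowerCase();
  // "skip" would disable the CUDA download; "11"/"12" pin a CUDA major version.
  return ["skip", "11", "12"].includes(value) ? value : "auto";
}

console.log(cudaInstallPreference({ npm_config_onnxruntime_node_install_cuda: "12" }));
// → 12
```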
@fs-eire I confirm that the new rev.1 works fine without any issues!
|
Closing the issue since it is resolved in the new release.
Describe the issue
I can build for CPU only, but I cannot complete the build for other EPs. I cannot find what is going wrong because the log gives me too little information.
The build command just fails with a non-zero exit code. After the build I can install the npm package from source with "onnxruntime_binding.node", "DirectML.dll", and "onnxruntime.dll", but "onnxruntime_providers_shared.dll" is missing.
I guess something goes wrong right after making the node binding?
Urgency
No response
Target platform
Windows 11 (x64), CUDA 12.4, cuDNN 9.0 (12.3)
Build script
```
.\build.bat --config Release --parallel --build_nodejs --use_dml --use_cuda --use_tensorrt --tensorrt_home "C:\SystemSource\NVIDIA GPU Computing Toolkit\TensorRT\8.6.1.6" --build_shared_lib
```
Error / output
Visual Studio Version
2022 (17.x) Community
GCC / Compiler Version
No response