## Compiling FAQ

### Environment Requirement

- cmake 3.10+
- gcc 4.9+
- protobuf 3.0+

__Remember to run cmake again after upgrading gcc.__

### schema/generate.sh Related Errors

``` shell
*** building flatc ***
CMake Error: Could not find CMAKE_ROOT !!!
```

If the script fails with the error above, your CMake was not installed correctly.

Try ```sudo apt install extra-cmake-modules``` or ```export CMAKE_ROOT=/path/to/where_cmake_installed``` to fix it.

__Remember to run schema/generate.sh after editing the schema (*.proto).__

### tools/script/get_model.sh Related Errors

``` shell
Could NOT find Protobuf (missing: Protobuf_INCLUDE_DIR)
```

``` shell
Unrecognized syntax identifier "proto3". This parser only recognizes "proto2".
```

If the script fails with the errors above, your protobuf was not installed correctly. Follow [Protobuf's Installation Instructions](https://github.com/protocolbuffers/protobuf/blob/master/src/README.md) to install it.

If multiple protobufs are installed and conflict with each other, you could try the solutions below:

``` shell
which protoc
# comment out the output path in .bashrc if it does NOT point to the correct protoc
source .bashrc
sudo ldconfig
```

or

``` shell
# uninstall
sudo apt-get remove libprotobuf-dev
sudo apt-get remove protobuf-compiler
sudo apt-get remove python-protobuf
sudo rm -rf /usr/local/bin/protoc
sudo rm -rf /usr/bin/protoc
sudo rm -rf /usr/local/include/google
sudo rm -rf /usr/local/include/protobuf*
sudo rm -rf /usr/include/google
sudo rm -rf /usr/include/protobuf*

# install
sudo apt-get update
sudo ldconfig
sudo apt-get install libprotobuf* protobuf-compiler python-protobuf
```

### Cross-compile on Windows

Cross-compiling on Windows is not currently supported. You may try https://github.com/microsoft/Terminal with the Windows Subsystem for Linux included.

### Quantized Models

Only TensorFlow quantized models are supported for now. We also plan to provide a training-free quantization tool based on the MNN model format.

### Unsupported Operations

``` shell
opConverter ==> MNN Converter NOT_SUPPORTED_OP: [ ANY_OP_NAME ]
```

If the MNNConverter fails with the error above, one or more operations are not supported by MNN. You could submit an issue or leave a comment at the pinned issue. If you want to implement the operation yourself, you can follow [our guide](AddOp_EN.md). Pull requests are always welcome.


__The TensorFlow SSD model is not supported: use of the TensorFlow Object Detection API produces some unsupported control-logic operations in the post-processing part, and the TensorFlow SSD model is not as efficient as the Caffe SSD model. So it is recommended to use the Caffe version of the SSD model.__


## Runtime FAQ

### What is NC4HW4 Format?

The difference between NCHW and NC4HW4 is like the difference between the planar and chunky color-representation methods. Imagine a 2x2 RGBA image: in planar representation (NCHW), its storage would be `RRRRGGGGBBBBAAAA`; in chunky representation (NC4HW4), its storage would be `RGBARGBARGBARGBA`. In MNN, we pack every 4 channels for floats, or every 8 channels for int8s, to gain better performance with SIMD.

You can obtain a tensor's format through ```TensorUtils::getDescribe(tensor)->dimensionFormat```. If it returns `MNN_DATA_FORMAT_NC4HW4`, the channel dimension is packed, which may cause the tensor's elementSize to be greater than the product of its dimensions.

### How to Convert Between Formats?

You can convert tensor formats with the code below:

``` c++
auto srcTensor = Tensor::create({1, 224, 224, 3}, Tensor::TENSORFLOW);
// ... set srcTensor data
auto dstTensor = net->getSessionInput(session, NULL);
dstTensor->copyFromHostTensor(srcTensor);
```
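The reverse direction works the same way. The following fragment is a sketch (assuming `net` and `session` exist, as in the snippet above): a host tensor is created with the desired layout, and the output is copied into it.

``` c++
// Sketch: copy a (possibly NC4HW4) output tensor into a host tensor
// laid out in TensorFlow (NHWC) order.
auto outputTensor = net->getSessionOutput(session, NULL);
auto hostTensor = new Tensor(outputTensor, Tensor::TENSORFLOW);
outputTensor->copyToHostTensor(hostTensor);
// ... read results via hostTensor->host<float>()
delete hostTensor;
```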

### Why is copying output tensor data so slow on the GPU backend?

If you do not wait for GPU inference to finish (through runSessionWithCallback with sync), copyToHostTensor has to wait for it before copying the data.