...

For the cluster specifically you need to work with Singularity instead of Docker. To make this work, you need to have the Singularity patch installed. This patch was recently merged into the dev branch of FINN; however, if you work on an older version (for example main, or your own custom fork), you still need to install it yourself. To point FINN to where you have put your Singularity image, you need to set the

Code Block
 export FINN_SINGULARITY=/opt/software/FPGA/finn/finn-dev-latest.sif

environment variable. (If you have set the field pointing to the .sif file in your config and use the cluster environment, this variable is set automatically; remember to update it after changing your configuration.)
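
If you want to use an image stored somewhere else, you can also set the variable manually before starting a run. The path below is only a hypothetical example; replace it with wherever your own image lives.

Code Block
 # Hypothetical location - replace with the path to your own .sif image
 export FINN_SINGULARITY=$HOME/images/finn-dev.sif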

After this, FINN will be set up and ready to use. Keep in mind that the default repository and branch currently point to our internal development fork, which means that if you do not change these settings, your builds might fail because of our work in progress, or because you do not have permission to access the private Git repository. For stable usage, change these settings to the official FINN repository and its main branch.
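
If you prefer to work against the official sources directly, you can also clone them yourself. The sketch below assumes the public GitHub location of FINN and its main branch; how the cloned checkout is wired into your config depends on your setup.

Code Block
 # Clone the official FINN repository and switch to the stable main branch
 git clone https://github.com/Xilinx/finn.git
 cd finn
 git checkout main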

Usage

To use FINN, you first need a network to start with. These networks come in the form of an ONNX file, a data format for sharing and distributing neural network architectures independently of the framework they were created in. Building your own quantized network is done with Brevitas. However, since getting into Brevitas and Quantization Aware Training (QAT) is fairly involved on its own, you can start with the sample ONNX model and build.py file supplied by the FINN developers themselves. The sample model can be found at this path

...

This will look for a directory called model and try to execute the build file within it. To do this, a Slurm job is started, which converts the model to HLS and synthesizes it (broadly speaking; for an exact overview of what FINN does, refer to the documentation linked above).
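
Because the build runs as a Slurm job rather than in your interactive shell, you can monitor its progress with the usual Slurm tools, for example:

Code Block
 # List your own queued and running jobs to check on the FINN build
 squeue -u $USER
 # Show details of a specific job once you know its ID
 scontrol show job <job-id>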

Tips

When starting a run, the script may warn you that VIVADO_PATH, VITIS_PATH, or HLS_PATH are not set or are set incorrectly. When working on the cluster, this warning can be ignored, since the module system takes care of these paths. If the tools are not found later on, or if you are working locally, setting these variables explicitly is a possible fix.
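
For a local installation, that could look like the following. The install locations are only hypothetical examples and depend on where and in which version you installed the Xilinx tools.

Code Block
 # Hypothetical local install locations - adjust prefix and version to your setup
 export VIVADO_PATH=/tools/Xilinx/Vivado/2022.2
 export VITIS_PATH=/tools/Xilinx/Vitis/2022.2
 export HLS_PATH=/tools/Xilinx/Vitis_HLS/2022.2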

Remember that to use FINN on the cluster you need to be in a compute time project. If the job does not start properly for that reason, you can either set the corresponding sbatch flag in the build script manually, or, in case you do not want to change the script after every update of your config, set the relevant environment variable that defines the default project to use for Slurm jobs.
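
With standard Slurm, both options could look roughly like this; the project name and script name are placeholders, and whether your setup reads SBATCH_ACCOUNT or a tool-specific variable depends on the run script.

Code Block
 # Option 1: pass the compute time project explicitly when submitting the build script
 sbatch --account=<your-project> <build-script>
 # Option 2: set the default account once, so every sbatch submission uses it
 export SBATCH_ACCOUNT=<your-project>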

The results of the synthesis can be found in <project-directory>/<output-directory>/ (in the sample case model/out_dir/). Make sure to take a good look around the output folder and note all files residing there, as this will help you quite a lot when debugging your own networks and FINN runs.
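
As a starting point, simply listing the directory gives you a good overview; which subfolders appear (for example reports or intermediate models) depends on the outputs requested in build.py.

Code Block
 # Inspect the results of the sample run (paths follow the sample configuration)
 ls model/out_dir/
 # The build log in the output directory is usually the first place to look when a run fails
 ls model/out_dir/*.log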

...