  1. Nov 26, 2020
    • Support for Split operation · f416af1e
      Bhatu authored
      We support splitting a tensor along an axis into n pieces, where n
      must be a compile-time constant.
      E.g.:
        Split(Tensor of shape(5,30), splits=3, axis=1)
        returns 3 tensors of shape(5,10) each.
      
      Currently we do not support splitting into tensors of specified shapes
      (num_or_size_splits); that functionality will be added later.
      
      We also do not support splitting into n pieces where n is a runtime
      value because we do not support run-time code generation yet.
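      This op mirrors TensorFlow's tf.split with an integer split count. A minimal NumPy sketch of the example above (array values are hypothetical; only the shapes come from the commit message):

```python
import numpy as np

# Split a (5, 30) tensor into 3 equal pieces along axis 1, mirroring
# Split(Tensor of shape(5,30), splits=3, axis=1). The split count must
# be known here, just as the compiler requires a constant n.
x = np.arange(150).reshape(5, 30)
parts = np.split(x, 3, axis=1)

assert len(parts) == 3
assert all(p.shape == (5, 10) for p in parts)
```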
      
      This also adds support in the frontend for an op to return multiple
      values.
  2. Aug 15, 2020
  3. Jul 02, 2020
    • Updated compiler changes (#62) · c10e8f5b
      Nishant Kumar authored
      * With new compiler changes
      
      * After backend interface cleanup
      
      * Interface cleaned up
      
      * Removed funcSSCons and FLOAT_PRECISION fix
      
      * funcSSCons right fix1
      
      * Cleanup of several things; Athos and EzPC changes for 2PC
      
      * Residual changes
      
      * More changes
      
      * One left file
  4. May 28, 2020
    • ONNX compiler and a bunch of operations for large DNNs (shufflenet, mobilenet, ..) (#58) · 42278e88
      shubham ugare authored
      
      * Added node in SeeDot ast for Conv2DBackPropInput.
      
      * Added conv3d to SeeDot ast.
      
      * Dumping ONNX to SeeDot compiler code
      
      * Conv3d added throughout seedot. Impl left.
      
      * Added 64-bit ezpc library functions for conv3d.
      
      * Added 32-bit ezpc library functions for conv3d.
      
      * Support for relu 5d in ezpc library.
      
      * Cleaning of code and some additional nodes
      
      * Added ConvTranspose2D through seedot. Impl left.
      
      * ConvTranspose2D impl done. Testing.
      
      * ConvTranspose2D done.
      
      * ConvTranspose3D done.
      
      * Handling Onnx inputs in seedot
      
      * Pickling the SeeDot AST
      
      * Added Conv3d and Gemm nodes
      
      * Input is reshaped to work with SeeDot
      
      * Reshape before and after each onnx node
      
      * Removed minor bugs
      
      * Added ConvTranspose and tested on Prostrate
      
      * Added input dumping and conv2dTranspose
      
      * Added a script to compile onnx model to cpp
      
      * Added FusedBatchNorm and MatAdd for 3D spatial dimension
      
      * resolved minor bug
      
      * Add support for transpose, implicit broadcasting in matadd/mul/div, padding and createcopy for 5d.
      
      -- Transpose:
      We generate code based on the constant values of the perm input.
      
      -- Implicit broadcasting:
      If any of the dims in the input is 1, the values along that dimension need to be
      broadcasted so as to match the output dimension.
      
      We add ternary operators that check, at each iteration over that dim,
      whether an input has size 1 there, and choose array indices appropriately.
      
      ToDo: We are adding unnecessary runtime overhead. The shapes are known at
      compile time, so we could generate code tailored to the inputs instead of
      making calls to general functions.
      
      As a workaround, for now, we add the always_inline attribute to those
      function definitions. The compiler then eliminates all the ternary operations.
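      A minimal Python sketch of the ternary-index scheme for the 2-D case (the function name is hypothetical; the generated code is C++, but the index selection has the same shape):

```python
def mat_add_broadcast(A, B):
    """Elementwise add with implicit broadcasting over size-1 dims.
    At each loop iteration a ternary check picks index 0 along any
    dimension where an input has size 1, matching the output shape."""
    rows = max(len(A), len(B))
    cols = max(len(A[0]), len(B[0]))
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # ternary checks: use index 0 along a dim of size 1
            ai = 0 if len(A) == 1 else i
            aj = 0 if len(A[0]) == 1 else j
            bi = 0 if len(B) == 1 else i
            bj = 0 if len(B[0]) == 1 else j
            out[i][j] = A[ai][aj] + B[bi][bj]
    return out

# A (1x3) row broadcast against B (2x1) column gives a 2x3 result.
result = mat_add_broadcast([[1, 2, 3]], [[10], [20]])
assert result == [[11, 12, 13], [21, 22, 23]]
```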
      
      -- ToDo: Generate code for padding/copy instead of specializing for tensor ranks.
      
      * updated script to take model name
      
      * input and output is stored as numpy arrays
      
      * moved utility function to common.py
      
      * Added debugging info
      
      * onnx run can now print intermediate values
      
      * Simplified debugging by logging onnx output and cpp output for selected intermediate onnx node
      
      * Added Readme with introduction and debugging information
      
      * Updated the README
      
      * Removed bugs, works with resnet18
      
      * More logging and more verbose output
      
      * Working on Resnet50, added and updated required nodes
      
      * Renaming output and debug logging file
      
      * Added Transpose, split, Concat, Constant onnx operations for other models
      
      * removed minor script bugs
      
      * restructuring of the code
      
      * Run onnx using the tf backend for cases where onnxruntime does not support all operations
      
      * tf backend onnx run works in debug mode
      
      * minor issue with output names
      
      * Initializing a test module
      
      * Updated tests
      
      * changing the compile script output location
      
      * Added relu test and changed the name of testing models
      
      * change in temp file location for run_onnx_tf
      
      * Removed bug from Conv2DTranspose
      
      * Added convTranspose 3d test and renamed CI/CO
      
      * For testing on the VM
      
      * minor bug
      
      * Added failing test
      
      * Minor changes to the tests
      
      * Replacing faster convTranspose and test with stride > 1
      
      * Automatically add OpenMP multithreading directives to the cpp code
      
      * Removed data race bug
      
      * fixed conv2d stride bug
      
      * support for depthwise convolution through matrix multiplication
      
      * Works on shufflenet
      
      * Added some util functions and tests
      
      * Fixing array sizes in conv2d
      
      * Add Pad support in ONNX
      
      TODO: Convert paddings to be taken as a public constant array instead
      of private input.
      
      * Add FusedBatchNormV3 Support
      
      * Add script to manually remove onnx nodes and change outputs.
      
      This was made to remove output nodes of shufflenet, such as
      Softmax, Sigmoid, Argmax, ArrayFeatureExtractor, ZipMap.
      
      * Scripts to convert keras models to onnx or tensorflow protobufs.
      
      Keras models are first configured to inference mode and then the
      conversions are done.
      
      For conversion to onnx we need to fix the input size and then run
      shape inference. ONNXCompiler expects fixed-size inputs.
      
      * Fix input batch size in output onnx model.
      
      * addressed comments
      
      * rebase to master
      
      * addressed comments
      
      * addressed comments
      
      Co-authored-by: default avatarNishant Kumar <t-niskum@microsoft.com>
      Co-authored-by: default avatarBhatu <prbhatu@microsoft.com>
  5. Sep 18, 2019
  6. Sep 13, 2019