  3. Jan 10, 2021
      Update disclaimer · 055da16c
      Bhatu authored
      Add unittests and related scripts · af13174c
      Bhatu authored
      To run tests, navigate to the Athos/tests directory (or provide the path to pytest):
      1. Run all tests with the CPP backend. The backend can be CPP, 3PC, 2PC_HE, or 2PC_OT:
         pytest -rs . --backend="CPP"
      2. Run a specific test.
         pytest -rs . -k "test_arith_binop" --backend="CPP"
      3. Run and generate a coverage report
         pytest --cov --cov-report html --cov-config=pytest_coverage_tf.config .
      
      Install pytest and pytest-cov to run the above commands.
      Fix CreateTensor signature for 64 bit case · 6999cc02
      Bhatu authored
      Typo fix in seedot · 5420ee02
      Bhatu authored
      Handle TransformGraph for cases where constant outputs were converted to vars. · 818b8c2c
      Bhatu authored
      If the graph has a constant output, it will get converted to a variable. While
      dumping graph_defs, TransformGraph needs to be able to find that output, so we
      teach it to find the newly created variable.
      Add support to directly link code with SCI, Porthos · 4346e966
      Bhatu authored
      Now the user does not have to manually copy generated cpp files into the network
      directories of SCI and Porthos and edit the CMake files. Once the SCI and Porthos
      libraries are built, CompileTFGraph.py can directly compile the generated
      cpp files and link them against SCI/Porthos.
      Always checkout Eigen3 and SEAL while building SCI · 7e289848
      Bhatu authored
      They are checked out locally in the extern/ directory so that programs can be
      linked against SCI without manually putting them in the networks directory and
      modifying the CMake file.

      See CompileTFGraph.py for how to build and link programs directly.
      Pass key directory as argument for Porthos. · a349497b
      Bhatu authored
      Previously, programs linked with Porthos expected the keys to be in the
      current_directory/files folder. Now users can explicitly pass the directory
      when invoking the program. The keys still have to be named keyA, keyAB, keyB
      and keyD, but they can be located in arbitrary folders.
      
      So party 0 will invoke the program as:
        ./program 0 files/addresses_file path/to/keys/dir
  4. Dec 23, 2020
      Add grappler opts and new script to compile graphs. · 895c57d2
      Bhatu authored
      Usage:
        python CompileTFGraph.py --config config.json
      
      where a sample config.json looks like this:
      
      {
        "model_name":"full_kernel.pb",
        "input_tensors":{
            "actual_input_1":"2,245,234,3",
            "input2":"2,245,234,3"
        },
        "output_tensors":[
          "output1",
          "output2"
        ],
        "scale":10,
        "bitlength":63,
        "mode":"SCI",
        "save_weights" : true
      }
      
      Run python CompileTFGraph.py --help to see all the options.
  7. Nov 26, 2020
    • Bhatu's avatar
      Improvements to compiler scripts. · 2dd0ce4d
      Bhatu authored
      Fix tf size inference for multiple output tensors. · 7a0f955a
      Bhatu authored
      There was an assumption that ops only have a single output tensor.
      However, ops like split return multiple tensors. This fixes that.
      Add a new garbage collector pass. · 07901e61
      Bhatu authored
      For identity-like ops (a = b), we sometimes run into use-after-free and
      double-free bugs.
      
      For this snippet
          J100 = J99
          J101 = J99 + 3         <- last use of J99
          J102 = J100 * 2        <- last use of J100
      before we were doing:
          J100 = J99
          J101 = J99 + 3
          free(J99)
          J102 = J100 * 2        <- use-after-free
          free(J100)             <- double-free
      now we do:
          J100 = J99
          J101 = J99 + 3
          J102 = J100 * 2
          free(J100)
      
      Algorithm:
      We iterate through the program in reverse order, and every time we see a
      use of a variable we insert a free after it, unless it has already been
      freed. When checking whether a variable has been freed, we also check
      whether any of its aliases have been freed.

      For alias analysis, we maintain alias sets using disjoint sets. Whenever
      we encounter an a=b statement, we simply union the sets of a and b.
      
      This replaces the old LivenessOpti pass.
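The pass described above can be sketched in a few lines of Python. This is an illustrative reimplementation, not the actual Athos code: statements are given as dicts, a union-find structure tracks alias sets, and a reverse scan inserts a free after the last use of each alias class.

```python
class DisjointSet:
    """Union-find over variable names; each set is one alias class."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        self.parent[x] = root              # path compression
        return root

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)


def insert_frees(stmts):
    """stmts: dicts with 'text', 'def' (assigned var), 'uses' (vars read)
    and an optional 'copy' flag marking identity statements like a = b."""
    aliases = DisjointSet()
    for s in stmts:
        if s.get('copy'):                  # a = b: merge alias sets of a and b
            aliases.union(s['def'], s['uses'][0])
    freed, rev_out = set(), []
    for s in reversed(stmts):              # walk the program backwards
        for v in s['uses']:
            root = aliases.find(v)
            if root not in freed:          # neither v nor any alias freed yet
                freed.add(root)
                rev_out.append(f"free({v})")   # lands *after* s in output
        rev_out.append(s['text'])
    return list(reversed(rev_out))
```

On the J99/J100 snippet from this message, the sketch emits exactly one free(J100) after the last use, as in the "now we do" version.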
      Add support for broadcasting semantics for binops. · c449271d
      Bhatu authored
      We add broadcasting support for add, sub, mul and equal.
      
      The broadcasting semantics are specified here
      https://numpy.org/doc/stable/user/basics.broadcasting.html
      
      Say we are given input
      A (4d array):  8 x 1 x 6 x 1
      B (3d array):      7 x 1 x 5
      
      We generate a loop with
      Result (4d array):  8 x 7 x 6 x 5
      for i0=[0:8]
        for i1=[0:7]
          for i2=[0:6]
            for i3=[0:5]
              Result[i0][i1][i2][i3] = A[i0][0][i2][0] {+,*,-,==} B[i1][0][i3]
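The shape rule above can be sketched in standalone Python (an illustration of the NumPy rule, not the Athos codegen): shapes are aligned from the right, and each dimension pair must be equal or contain a 1.

```python
from itertools import zip_longest

def broadcast_shape(a, b):
    """Compute the NumPy-style broadcast shape of two shapes."""
    out = []
    # align from the right; missing leading dims count as 1
    for x, y in zip_longest(reversed(a), reversed(b), fillvalue=1):
        if x != y and 1 not in (x, y):
            raise ValueError(f"shapes {a} and {b} are not broadcastable")
        out.append(max(x, y))              # the non-1 dimension wins
    return tuple(reversed(out))
```

For the example above, broadcast_shape((8, 1, 6, 1), (7, 1, 5)) gives (8, 7, 6, 5); size-1 dimensions are then indexed with 0 in the generated loop, as shown.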
      Support for reduced mean · cee44f6d
      Bhatu authored
      Adds support for the reduce_mean operation in TensorFlow.
      Consider the example:
        For inputs:
          Tensor of shape(s0,s1,s2,s3)
          reduction axes = [0,3]
      
        We generate the following program:
          If keep_dim == true
            output is of shape(1,s1,s2,1)
          else
            output is of shape(s1,s2)
      
          for i1=[0:s1]
            for i2=[0:s2]
              sum = 0
              for i0=[0:s0]
                for i3=[0:s3]
                  sum  = sum + input[i0][i1][i2][i3]
              output[i1][i2] = sum / (s0 * s3)        // keep_dim=false
        OR
              output[0][i1][i2][0] = sum / (s0 * s3)  // keep_dim=true
      
      TODO: Also add support for reduced sum.
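The generated loop nest above can be mirrored in plain Python (an illustrative sketch for keep_dim=false with the reduction axes fixed to [0, 3]; not the actual codegen):

```python
def reduce_mean_axes_0_3(inp, s0, s1, s2, s3):
    """Mean over axes 0 and 3 of a (s0, s1, s2, s3) nested-list tensor."""
    out = [[0.0] * s2 for _ in range(s1)]  # keep_dim=false: shape (s1, s2)
    for i1 in range(s1):
        for i2 in range(s2):
            total = 0.0
            for i0 in range(s0):
                for i3 in range(s3):
                    total += inp[i0][i1][i2][i3]
            out[i1][i2] = total / (s0 * s3)   # divide by the reduced extent
    return out
```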
      Support for Split operation · f416af1e
      Bhatu authored
      We support splitting of a tensor along an axis into n pieces, where n
      has to be a constant.
      Eg:
        Split(Tensor of shape(5,30), splits=3, axis=1)
        returns 3 tensors of shape(5,10) each.
      
      Currently we do not support splitting into tensors of specified shapes
      (num_or_size_splits), though that functionality will be added later.
      
      We also do not support splitting into n pieces where n is a runtime
      value because we do not support run-time code generation yet.
      
      This also adds support in the frontend for an op to return multiple
      values.
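The Split semantics can be sketched recursively on nested lists (an illustration under the equal-pieces assumption stated above, not the actual codegen):

```python
def split(tensor, splits, axis):
    """Split a nested-list tensor into `splits` equal pieces along `axis`."""
    if axis == 0:
        n, rem = divmod(len(tensor), splits)
        assert rem == 0, "axis size must be divisible by splits"
        return [tensor[i * n:(i + 1) * n] for i in range(splits)]
    # recurse into each subtensor, then regroup the pieces per split index
    return [list(parts) for parts in
            zip(*(split(sub, splits, axis - 1) for sub in tensor))]
```

As in the example above, splitting a shape (5, 30) tensor with splits=3 along axis=1 yields three tensors of shape (5, 10).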
      Taint Analysis added to type inference · 3844d8cf
      Bhatu authored
      For every tensor in the graph, we want to know its 'taint'.
      Each tensor can have the possible taints:
        Client: Input to the ML model (eg: the image input).
        Server: The weights of the model.
        ClientXServer: A tensor that is derived from operations on both
                       client and server tensors.
        Secret_constant: A tensor that is a constant but declared as a secret.
        Public_constant: A tensor that is a constant but declared as public.
      
      The motivation behind this analysis is to insert optimized versions of
      multiplication. If one input is an activation and the other comes from
      the model (server), we can call
        ElemWiseActModelVectorMult (optimized)
      otherwise we insert a call to
        ElemWiseSecretSharedVectorMult
      
      Matmul also expects one of its inputs to have the 'Server' taint, which
      this analysis identifies.
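One way to picture the propagation is as a join over a small lattice of taints. The join and selection rules below are an assumption for illustration; the actual rules live in the type-inference pass.

```python
# The five taints listed above; names mirror the commit message.
CLIENT, SERVER, MIXED = "Client", "Server", "ClientXServer"
SECRET_C, PUBLIC_C = "Secret_constant", "Public_constant"

def join(t1, t2):
    """Taint of an op's output given its two input taints (assumed rules)."""
    if t1 == t2:
        return t1
    if t1 == PUBLIC_C:               # public constants adopt the other taint
        return t2
    if t2 == PUBLIC_C:
        return t1
    return MIXED                     # any other mix of secret-typed data

def pick_mul(t1, t2):
    """Choose a multiplication primitive from the operand taints (assumed)."""
    if SERVER in (t1, t2) and t1 != t2:   # one side is the model weights
        return "ElemWiseActModelVectorMult"
    return "ElemWiseSecretSharedVectorMult"
```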
      Fix codegen for unary negation of tensors · b6b2658d
      Bhatu authored
      Don't add clearmem calls for freeing scalars · 4c04295d
      Bhatu authored
      Implement StopGradient as no-op · 5ff67a26
      Bhatu authored
      Automatically scale down the final output. · 988fa41d
      Bhatu authored
      Previously, if there were mul-like ops, the final output would be of
      scale = 2 * scaling_factor. Now we introduce a scale down (if required)
      so that the scale of the model output equals scaling_factor.
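A small fixed-point sketch of why this is needed (the scale of 10 is illustrative, not the generated code): multiplying two values encoded at scale s yields a product at scale 2*s, so one shift restores the expected output scale.

```python
SCALE = 10                           # fractional bits (illustrative)

def encode(x):
    """Real number -> fixed-point integer at SCALE fractional bits."""
    return int(round(x * (1 << SCALE)))

def decode(x):
    """Fixed-point integer at SCALE fractional bits -> real number."""
    return x / (1 << SCALE)

a, b = encode(1.5), encode(2.0)
prod = a * b                         # product now carries scale 2 * SCALE
out = prod >> SCALE                  # scale down so the output scale == SCALE
```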
  8. Nov 25, 2020
      Fix double scaledown bug for mul like ops. · b15c3227
      Bhatu authored
      SquaredDifference exposed a bug in codegen where both inputs to mul were
      the same. Depending on the scale of the variables at that point, we
      sometimes scale down the inputs of the multiplication so as to
      maintain precision.
          scaledown(a, scale)
          scaledown(b, scale)
          mul(a,b)
      But in this case both inputs to mul were the same, so we were doing
          scaledown(a, scale)
          scaledown(a, scale)
          mul(a,a)
      This led to a loss of precision. Now we just do:
          scaledown(a, scale)
          mul(a,a)
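The fix can be sketched as deduplicating scale-down emissions over distinct operands (an illustrative sketch, not the actual codegen):

```python
def emit_scaledowns(inputs, scale):
    """Emit one scaledown per *distinct* multiplication operand."""
    seen, ops = set(), []
    for v in inputs:
        if v not in seen:            # skip operands already scaled down
            seen.add(v)
            ops.append(f"scaledown({v}, {scale})")
    return ops
```

With distinct operands this emits two scaledowns as before; with mul(a, a) it emits only one, avoiding the double scale-down.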
      Implement SquaredDifference · b54294e8
      Bhatu authored
      We do this as a simplification on the tensorflow graph itself.
      We transform SquaredDifference(a,b) into (a-b) * (a-b).
      Remove double quotes from attributes · 39e78075
      Bhatu authored
      Remove the "" from attributes while parsing the graph def itself,
      e.g. "\"dtype\"" -> "dtype",
      so we can directly refer to the attributes without adding double quotes to them.
      Add support for float16 tensor types. · 2172e484
      Bhatu authored