- Jan 20, 2021
-
-
Pratik Bhatu authored
-
Bhatu authored
-
Bhatu authored
Usage:
    python CompileSampleNetworks.py Networks/sample_network.config
This will compile and run ResNet by creating a tmux session named ResNet. Do python CompileSampleNetworks.py --help to see full usage.
-
- Jan 12, 2021
- Jan 10, 2021
-
-
Bhatu authored
-
Bhatu authored
To run tests, navigate to the Athos/tests directory (or provide the path to pytest):
1. Run all tests with the CPP target (the backend can be CPP, 3PC, 2PC_HE, or 2PC_OT):
    pytest -rs . --backend="CPP"
2. Run a specific test:
    pytest -rs . -k "test_arith_binop" --backend="CPP"
3. Run and generate a coverage report:
    pytest --cov --cov-report html --cov-config=pytest_coverage_tf.config .
Install pytest and pytest-cov to run the above commands.
-
Bhatu authored
-
Bhatu authored
-
Bhatu authored
If the graph has a constant output, it gets converted to a variable. While dumping graph_defs, TransformGraph needs to be able to find that output, so we teach it to find the newly created variable.
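A minimal sketch of what this amounts to, assuming TF 1.x's graph_transforms module and a hypothetical const_to_var map recorded while converting constant outputs to variables (not the exact Athos code):

    from tensorflow.tools.graph_transforms import TransformGraph

    def run_transforms(graph_def, input_names, output_names, const_to_var):
        # If an output was a constant that got turned into a variable, hand
        # TransformGraph the variable's name instead of the removed constant's.
        resolved = [const_to_var.get(name, name) for name in output_names]
        return TransformGraph(graph_def, input_names, resolved,
                              ["remove_nodes(op=Identity)"])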
-
Bhatu authored
-
Bhatu authored
The user no longer has to manually copy the generated cpp files into the network directories of SCI and Porthos and edit the CMake files. Once the SCI and Porthos libraries are built, CompileTFGraph.py can directly compile the generated cpp files and link them against SCI/Porthos.
-
Bhatu authored
Check them out locally in the extern/ directory so that programs can be linked against SCI without manually putting them in the networks directory and modifying the CMake file. See CompileTFGraph.py for how to build and link programs directly.
-
Bhatu authored
Previously, programs linked with Porthos expected the keys to be in the current_directory/files folder. Now users can explicitly pass the directory while invoking the program. The keys still have to be named keyA, keyAB, keyB and keyD, but they can be located in arbitrary folders. So party 0 will invoke the program as:
    ./program 0 files/addresses_file path/to/keys/dir
-
- Dec 23, 2020
-
-
Bhatu authored
Usage:
    python CompileTFGraph.py --config config.json
where a sample config.json looks like this:
    {
      "model_name": "full_kernel.pb",
      "input_tensors": {
        "actual_input_1": "2,245,234,3",
        "input2": "2,245,234,3"
      },
      "output_tensors": [
        "output1",
        "output2"
      ],
      "scale": 10,
      "bitlength": 63,
      "mode": "SCI",
      "save_weights": true
    }
Run python CompileTFGraph.py --help to see all the options.
-
- Dec 22, 2020
-
-
Bhatu authored
-
Bhatu authored
Sometimes the generated graph defs specify the 0th output explicitly. Example:
    node {
      name: "abc"
      input: "Placeholder_1:0"
    }
    node {
      name: "Placeholder_1"
      op: "Placeholder"
    }
Node abc specifies that it wants the 0th output of the Placeholder_1 node. However, for single-output nodes the tensor name is the same as the node name, so the Placeholder_1:0 tensor cannot be found. The same is true for the 0th output of multi-output nodes (whose output tensor names are node_name, node_name:1, ...). So while parsing the graph def we strip away any ":0" from the input names.
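A minimal sketch of the stripping step in Python (the function names are illustrative, not the actual Athos code):

    def canonicalize_input_name(name):
        # "Placeholder_1:0" -> "Placeholder_1"; "split:1" stays untouched because
        # only the implicit 0th output shares the node's name.
        return name[:-2] if name.endswith(":0") else name

    def canonicalize_inputs(graph_def):
        for node in graph_def.node:
            for i, inp in enumerate(node.input):
                node.input[i] = canonicalize_input_name(inp)
        return graph_def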
-
Bhatu authored
-
Bhatu authored
-
- Dec 21, 2020
-
-
Bhatu authored
Fix attribute parsing of strings. Don't do automatic scale down of argmax.
-
- Nov 26, 2020
-
-
Bhatu authored
-
Bhatu authored
There was an assumption that ops only have a single tensor output. However, ops like split return multiple tensors. This fixes that.
-
Bhatu authored
For identity-like ops (a = b), we sometimes run into use-after-free and double-free bugs. For this snippet:
    J100 = J99
    J101 = J99 + 3   <- last use of J99
    J102 = J100 * 2  <- last use of J100
before we were doing:
    J100 = J99
    J101 = J99 + 3
    free(J99)
    J102 = J100 * 2  <- use-after-free
    free(J100)       <- double-free
now we do:
    J100 = J99
    J101 = J99 + 3
    J102 = J100 * 2
    free(J100)
Algorithm: we iterate through the program in reverse order, and every time we see a use of a variable we insert a free after it, unless it has already been freed. When we check whether a variable has been freed, we also check whether any of its aliases have been freed. For alias analysis, we maintain alias sets using disjoint sets: whenever we encounter an a = b statement, we simply union the sets of a and b. This replaces the old LivenessOpti pass.
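A minimal Python sketch of the idea, assuming a program is represented as a list of (lhs, rhs_vars, is_identity) statements (this representation and the helper names are illustrative, not the actual Athos IR):

    class DisjointSet:
        def __init__(self):
            self.parent = {}

        def find(self, x):
            self.parent.setdefault(x, x)
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def union(self, a, b):
            self.parent[self.find(a)] = self.find(b)

    def insert_frees(program):
        aliases = DisjointSet()
        for lhs, rhs_vars, is_identity in program:
            if is_identity:                      # a = b: a and b alias each other
                aliases.union(lhs, rhs_vars[0])
        freed = set()                            # alias-set representatives already freed
        annotated = []
        for lhs, rhs_vars, _ in reversed(program):
            to_free = []
            for v in rhs_vars:
                rep = aliases.find(v)
                if rep not in freed:             # free each alias set exactly once,
                    freed.add(rep)               # at its last use in program order
                    to_free.append(v)
            annotated.append((lhs, rhs_vars, to_free))
        annotated.reverse()
        return annotated                         # to_free = vars to free after each stmt

Running this on the snippet above yields a single free of J100 after the J102 statement.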
-
Bhatu authored
We add broadcasting support for add, sub, mul and equal. The broadcasting semantics are specified here: https://numpy.org/doc/stable/user/basics.broadcasting.html
Say we are given inputs
    A (4d array): 8 x 1 x 6 x 1
    B (3d array):     7 x 1 x 5
We generate a loop with
    Result (4d array): 8 x 7 x 6 x 5
    for i0=[0:8]
      for i1=[0:7]
        for i2=[0:6]
          for i3=[0:5]
            Result[i0][i1][i2][i3] = A[i0][0][i2][0] {+,*,-,==} B[i1][0][i3]
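A minimal sketch of how the loop bounds and operand indices follow from the shapes, checked against numpy (the helper names are illustrative):

    import numpy as np

    def broadcast_shape(a, b):
        # Right-align the shapes; a missing or size-1 dim stretches to match the other.
        a, b = list(a), list(b)
        while len(a) < len(b): a.insert(0, 1)
        while len(b) < len(a): b.insert(0, 1)
        return [max(x, y) for x, y in zip(a, b)]

    def operand_index(shape, out_index):
        # Drop the leading output dims the operand lacks; broadcast (size-1)
        # dims always read index 0.
        idx = out_index[len(out_index) - len(shape):]
        return [0 if d == 1 else i for d, i in zip(shape, idx)]

    a, b = np.ones((8, 1, 6, 1)), np.ones((7, 1, 5))
    assert broadcast_shape(a.shape, b.shape) == [8, 7, 6, 5]
    assert operand_index(a.shape, ["i0", "i1", "i2", "i3"]) == ["i0", 0, "i2", 0]
    assert operand_index(b.shape, ["i0", "i1", "i2", "i3"]) == ["i1", 0, "i3"]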
-
Bhatu authored
Adds support for the reduce_mean operation in TensorFlow. Consider this example. For inputs
    Tensor of shape (s0, s1, s2, s3)
    reduction axes = [0, 3]
we generate the following program. If keep_dim == true the output has shape (1, s1, s2, 1), else it has shape (s1, s2).
    for i1=[0:s1]
      for i2=[0:s2]
        sum = 0
        for i0=[0:s0]
          for i3=[0:s3]
            sum = sum + input[i0][i1][i2][i3]
        output[i1][i2] = sum / (s0 * s3)        // keep_dim=false
        OR
        output[0][i1][i2][0] = sum / (s0 * s3)  // keep_dim=true
TODO: Also add support for reduce_sum.
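A plain-Python rendering of the same loop nest for a 4-d input and reduction axes [0, 3] (purely illustrative; the compiler emits fixed-point loops rather than Python):

    def reduce_mean_axes_0_3(inp, s0, s1, s2, s3, keep_dim):
        if keep_dim:
            out = [[[[0] for _ in range(s2)] for _ in range(s1)]]   # shape (1, s1, s2, 1)
        else:
            out = [[0] * s2 for _ in range(s1)]                     # shape (s1, s2)
        for i1 in range(s1):
            for i2 in range(s2):
                total = 0
                for i0 in range(s0):
                    for i3 in range(s3):
                        total += inp[i0][i1][i2][i3]
                if keep_dim:
                    out[0][i1][i2][0] = total / (s0 * s3)
                else:
                    out[i1][i2] = total / (s0 * s3)
        return out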
-
Bhatu authored
We support splitting a tensor along an axis into n pieces, where n has to be a constant. E.g. Split(Tensor of shape (5,30), splits=3, axis=1) returns 3 tensors of shape (5,10) each. Currently we do not support splitting into tensors of specified sizes (num_or_size_splits), though that functionality will be added later. We also do not support splitting into n pieces where n is a runtime value, because we do not support run-time code generation yet. This also adds support in the frontend for an op to return multiple values.
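For reference, the supported case matches numpy's even split (a sketch of the semantics, not the Athos codegen):

    import numpy as np

    x = np.zeros((5, 30))
    parts = np.split(x, 3, axis=1)     # n = 3 must be a compile-time constant
    assert [p.shape for p in parts] == [(5, 10)] * 3
    # np.split(x, [5, 12], axis=1)     # size-based splits: not supported yet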
-
Bhatu authored
For every tensor in the graph, we want to know its 'taint'. Each tensor can have one of the following taints:
    Client: input to the ML model (e.g. the image input).
    Server: the weights of the model.
    ClientXServer: a tensor derived from operations on both client and server tensors.
    Secret_constant: a tensor that is a constant but declared as secret.
    Public_constant: a tensor that is a constant but declared as public.
The motivation behind this analysis is to insert optimized versions of multiplication. If one input is the model (Server taint) and the other comes from the client side, we can call ElemWiseActModelVectorMult (optimized); otherwise we insert a call to ElemWiseSecretSharedVectorMult. Matmul also expects one of its inputs to have the 'Server' taint, which this analysis identifies.
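A minimal sketch of the propagation, using the taint names from above; the join rule and helper names are illustrative assumptions, not the exact Athos analysis:

    from enum import Enum

    class Taint(Enum):
        CLIENT = "Client"
        SERVER = "Server"
        CLIENT_X_SERVER = "ClientXServer"
        SECRET_CONST = "Secret_constant"
        PUBLIC_CONST = "Public_constant"

    def join(a, b):
        # Assumption: public constants do not change the taint of the other operand.
        if a == Taint.PUBLIC_CONST: return b
        if b == Taint.PUBLIC_CONST: return a
        if a == b: return a
        # Mixing client-side and server-side data yields ClientXServer.
        return Taint.CLIENT_X_SERVER

    def pick_elemwise_mul(a, b):
        # One operand is purely the model (Server) -> optimized variant.
        if Taint.SERVER in (a, b):
            return "ElemWiseActModelVectorMult"
        return "ElemWiseSecretSharedVectorMult"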
-
Bhatu authored
-
Bhatu authored
-
Bhatu authored
-
Bhatu authored
-
Bhatu authored
Previously, if there were mul-like ops, the final output would have scale = 2 * scaling_factor. Now we introduce a scale-down (if required) so that the scale of the model output equals scaling_factor.
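A worked fixed-point example (illustrative values; scaling_factor is the number of fractional bits):

    scaling_factor = 10                   # values stored as round(x * 2**10)
    a = round(1.5 * 2**scaling_factor)    # 1536, scale 10
    b = round(0.25 * 2**scaling_factor)   # 256,  scale 10
    prod = a * b                          # 393216, scale 20 = 2 * scaling_factor
    out = prod >> scaling_factor          # 384, back to scale 10
    assert out / 2**scaling_factor == 0.375   # == 1.5 * 0.25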
-
- Nov 25, 2020
-
-
Bhatu authored
SquaredDifference exposed a bug in codegen where both inputs to mul were the same. Depending on the scale of the variable at that point, we sometimes scale down the inputs of a multiplication so as to maintain precision:
    scaledown(a, scale)
    scaledown(b, scale)
    mul(a, b)
But in this case both inputs to mul were the same, so we were doing
    scaledown(a, scale)
    scaledown(a, scale)
    mul(a, a)
This led to loss of precision. Now we just do:
    scaledown(a, scale)
    mul(a, a)
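A minimal sketch of the fix (the emit helper is hypothetical, for illustration only):

    def emit_scaledowns(lhs, rhs, scale, emit):
        # Scale down each *distinct* operand exactly once.
        for operand in dict.fromkeys([lhs, rhs]):   # de-dupes, keeps order
            emit("scaledown({}, {})".format(operand, scale))

    # emit_scaledowns("a", "a", 10, print) -> scaledown(a, 10)               (only once)
    # emit_scaledowns("a", "b", 10, print) -> scaledown(a, 10), scaledown(b, 10)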
-
Bhatu authored
We do this as a simplification on the TensorFlow graph itself: we transform SquaredDifference(a, b) into (a-b) * (a-b).
-
Bhatu authored
Remove the surrounding double quotes from attribute names while parsing the graph def itself, e.g. "\"dtype\"" -> "dtype", so we can refer to attributes directly without adding double quotes to them.
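A one-line sketch of the clean-up (the function name is illustrative):

    def normalize_attr_name(name):
        # '"dtype"' -> 'dtype'; names without surrounding quotes are unchanged.
        return name[1:-1] if len(name) >= 2 and name[0] == name[-1] == '"' else name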
-
Bhatu authored
-
- Aug 31, 2020
-
-
Deevashwer authored
-
- Aug 23, 2020
-
-
Deevashwer authored
-