- Nov 26, 2020
-
-
Bhatu authored
-
Bhatu authored
There was an assumption that ops only have a single tensor output. However, ops like split return multiple tensors. This fixes that.
-
Bhatu authored
For identity-like ops (a = b), we sometimes ran into use-after-free and double-free bugs. For this snippet:
    J100 = J99
    J101 = J99 + 3   <- last use of J99
    J102 = J100 * 2  <- last use of J100
before, we were generating:
    J100 = J99
    J101 = J99 + 3
    free(J99)
    J102 = J100 * 2  <- use-after-free
    free(J100)       <- double-free
now we generate:
    J100 = J99
    J101 = J99 + 3
    J102 = J100 * 2
    free(J100)
Algorithm: We iterate through the program in reverse order, and every time we see a use of a variable we insert a free after it, unless it has already been freed. When checking whether a variable has been freed, we also check whether any of its aliases have been freed. For alias analysis we maintain alias sets using disjoint sets: whenever we encounter an a = b statement, we simply union the sets of a and b. This replaces the old LivenessOpti pass.
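A minimal sketch of this pass, assuming a toy IR of (lhs, used_vars, is_identity) tuples (hypothetical names, not the actual Athos representation):

    class DisjointSet:
        def __init__(self):
            self.parent = {}

        def find(self, x):
            self.parent.setdefault(x, x)
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def union(self, a, b):
            self.parent[self.find(a)] = self.find(b)

    def insert_frees(program):
        aliases = DisjointSet()
        for lhs, used_vars, is_identity in program:
            if is_identity:                      # a = b merges the alias sets of a and b
                aliases.union(lhs, used_vars[0])

        freed = set()                            # alias-set representatives already freed
        rev_out = []
        for stmt in reversed(program):           # first use seen in reverse = last use
            lhs, used_vars, is_identity = stmt
            frees = []
            for v in used_vars:
                if aliases.find(v) not in freed:
                    freed.add(aliases.find(v))
                    frees.append(("free", v))
            rev_out.extend(reversed(frees))      # frees land right after stmt in forward order
            rev_out.append(stmt)
        return list(reversed(rev_out))

    prog = [("J100", ["J99"], True),
            ("J101", ["J99"], False),
            ("J102", ["J100"], False)]
    # insert_frees(prog) emits a single free of J100 after the J102 statement,
    # since J99 and J100 are in the same alias set.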
-
Bhatu authored
We add broadcasting support for add, sub, mul and equal. The broadcasting semantics are specified here: https://numpy.org/doc/stable/user/basics.broadcasting.html
Say we are given inputs
    A (4d array): 8 x 1 x 6 x 1
    B (3d array):     7 x 1 x 5
Then Result is a 4d array of shape 8 x 7 x 6 x 5 and we generate the loop
    for i0=[0:8]
      for i1=[0:7]
        for i2=[0:6]
          for i3=[0:5]
            Result[i0][i1][i2][i3] = A[i0][0][i2][0] {+,*,-,==} B[i1][0][i3]
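A quick sketch of how the output shape and per-input indices can be derived (hypothetical helpers, not the actual Athos codegen):

    from itertools import product, zip_longest

    def broadcast_shape(a_shape, b_shape):
        out = []
        for x, y in zip_longest(reversed(a_shape), reversed(b_shape), fillvalue=1):
            if x != y and 1 not in (x, y):
                raise ValueError("shapes are not broadcastable")
            out.append(max(x, y))
        return list(reversed(out))

    def input_index(out_idx, in_shape):
        # Drop leading output dims the input lacks; use index 0 on broadcast (size-1) dims.
        idx = out_idx[len(out_idx) - len(in_shape):]
        return [0 if d == 1 else i for i, d in zip(idx, in_shape)]

    A_shape, B_shape = [8, 1, 6, 1], [7, 1, 5]
    out_shape = broadcast_shape(A_shape, B_shape)      # [8, 7, 6, 5]
    for out_idx in product(*(range(d) for d in out_shape)):
        a_idx = input_index(list(out_idx), A_shape)    # [i0, 0, i2, 0]
        b_idx = input_index(list(out_idx), B_shape)    # [i1, 0, i3]
        # Result[out_idx] = A[a_idx] {+,*,-,==} B[b_idx]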
-
Bhatu authored
Adds support for the reduce_mean operation in tensorflow. Consider the example:
    Input: tensor of shape (s0,s1,s2,s3), reduction axes = [0,3]
    If keep_dim == true the output is of shape (1,s1,s2,1), else it is of shape (s1,s2).
We generate the following program:
    for i1=[0:s1]
      for i2=[0:s2]
        sum = 0
        for i0=[0:s0]
          for i3=[0:s3]
            sum = sum + input[i0][i1][i2][i3]
        output[i1][i2] = sum / (s0 * s3)        // keep_dim=false
        OR
        output[0][i1][i2][0] = sum / (s0 * s3)  // keep_dim=true
TODO: Also add support for reduce_sum.
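A plain-Python reference of the same reduction, using a dict keyed by index tuples (illustrative only, not the generated code):

    from itertools import product

    def reduce_mean(inp, shape, axes, keep_dim):
        kept = [i for i in range(len(shape)) if i not in axes]
        count = 1
        for a in axes:
            count *= shape[a]
        out = {}
        # Outer loops over the kept dims, inner loops over the reduced dims.
        for kept_idx in product(*(range(shape[i]) for i in kept)):
            total = 0
            for red_idx in product(*(range(shape[a]) for a in axes)):
                full = [0] * len(shape)
                for i, v in zip(kept, kept_idx):
                    full[i] = v
                for a, v in zip(axes, red_idx):
                    full[a] = v
                total += inp[tuple(full)]
            if keep_dim:
                key = [0] * len(shape)
                for i, v in zip(kept, kept_idx):
                    key[i] = v
                out[tuple(key)] = total / count
            else:
                out[kept_idx] = total / count
        return out

    shape = (2, 3, 4, 5)
    inp = {idx: 1.0 for idx in product(*(range(d) for d in shape))}
    res = reduce_mean(inp, shape, axes=[0, 3], keep_dim=False)
    # res[(i1, i2)] == 1.0 for every (i1, i2)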
-
Bhatu authored
We support splitting of a tensor along an axis into n pieces, where n has to be a constant. Eg: Split(Tensor of shape(5,30), splits=3, axis=1) returns 3 tensors of shape (5,10) each. Currently we do not support splitting into tensors of specified sizes (num_or_size_splits), though that functionality will be added later. We also do not support splitting into n pieces where n is a runtime value, because we do not support run-time code generation yet. This also adds support in the frontend for an op to return multiple values.
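For reference, the supported case matches numpy's equal-split semantics (illustrative only; the compiler generates loop code rather than calling numpy):

    import numpy as np

    t = np.zeros((5, 30))
    parts = np.split(t, 3, axis=1)                    # three tensors of shape (5, 10)
    assert [p.shape for p in parts] == [(5, 10)] * 3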
-
Bhatu authored
For every tensor in the graph, we want to know its 'taint'. Each tensor can have one of the following taints:
    Client: input to the ML model (eg: the image input).
    Server: the weights of the model.
    ClientXServer: a tensor derived from operations on both client and server tensors.
    Secret_constant: a tensor that is a constant but declared as secret.
    Public_constant: a tensor that is a constant but declared as public.
The motivation behind this analysis is to insert optimized versions of multiplication. If one input comes from the client side and the other is a model weight from the server, we can call ElemWiseActModelVectorMult (optimized); otherwise we insert a call to ElemWiseSecretSharedVectorMult. Matmul also expects one of its inputs to have the 'Server' taint, which this analysis identifies.
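A rough sketch of how such taints could be propagated and used; the join rule and the selection condition below are plausible assumptions for illustration, not necessarily the exact rules used by the pass:

    CLIENT, SERVER, CLIENT_X_SERVER, SECRET_C, PUBLIC_C = (
        "Client", "Server", "ClientXServer", "Secret_constant", "Public_constant")

    def join(t1, t2):
        # Output taint of a binary op as a function of its input taints.
        if t1 == t2:
            return t1
        if PUBLIC_C in (t1, t2):              # a public constant doesn't change the taint
            return t1 if t2 == PUBLIC_C else t2
        return CLIENT_X_SERVER                # mixing differently-tainted secrets

    def pick_elemwise_mul(t_a, t_b):
        # Optimized call only when exactly one operand carries the model (Server) taint.
        if SERVER in (t_a, t_b) and t_a != t_b:
            return "ElemWiseActModelVectorMult"
        return "ElemWiseSecretSharedVectorMult"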
-
Bhatu authored
-
Bhatu authored
-
Bhatu authored
-
Bhatu authored
-
Bhatu authored
Previously, if there were mul-like ops, the final output would be of scale = 2 * scaling_factor. Now we introduce a scale-down (if required) so that the scale of the model output is scaling_factor.
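A worked fixed-point example of why the scale-down is needed (generic fixed-point arithmetic, not the Athos codegen itself):

    scaling_factor = 12                   # values stored as round(x * 2**12)
    a, b = 1.5, 2.0
    a_fx = round(a * 2**scaling_factor)   # scale 12
    b_fx = round(b * 2**scaling_factor)   # scale 12
    prod = a_fx * b_fx                    # scale 24 = 2 * scaling_factor
    prod_scaled = prod >> scaling_factor  # scale back down to scaling_factor
    assert prod_scaled / 2**scaling_factor == 3.0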
-
- Nov 25, 2020
-
-
Bhatu authored
Squarediff exposed a bug in codegen where both inputs to mul were the same. Depending on the scale of the variable at that point, we sometimes scale down the inputs of a multiplication so as to maintain precision:
    scaledown(a, scale)
    scaledown(b, scale)
    mul(a,b)
But in this case both inputs to mul were the same, so we were doing:
    scaledown(a, scale)
    scaledown(a, scale)
    mul(a,a)
This led to loss of precision. Now we just do:
    scaledown(a, scale)
    mul(a,a)
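A minimal sketch of the fix, using a toy emit() helper rather than the actual codegen:

    code = []

    def emit(op, *args):
        code.append((op,) + args)

    def emit_mul(a, b, scale):
        emit("scaledown", a, scale)
        if b != a:                        # the fix: don't scale the shared operand twice
            emit("scaledown", b, scale)
        emit("mul", a, b)

    emit_mul("J7", "J7", 12)              # -> scaledown(J7, 12); mul(J7, J7)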
-
Bhatu authored
We do this as a simplification on the TensorFlow graph itself: we transform SquaredDifference(a, b) into (a-b) * (a-b).
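A toy illustration of the rewrite on an expression tree (hypothetical tuple representation; the actual pass rewrites the TensorFlow graph def):

    def rewrite(node):
        if node[0] == "SquaredDifference":
            _, a, b = node
            diff = ("Sub", a, b)
            return ("Mul", diff, diff)
        return node

    print(rewrite(("SquaredDifference", "a", "b")))
    # ('Mul', ('Sub', 'a', 'b'), ('Sub', 'a', 'b'))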
-
Bhatu authored
Remove the "" from attributes while parsing the graph def itself. eg: "\"dtype\"" -> "dtype" So we can directly refer to the attributes without adding double quotes to them.
-
Bhatu authored
-
- Aug 31, 2020
-
-
Deevashwer authored
-
- Aug 23, 2020
-
-
Deevashwer authored
-
- Aug 20, 2020
-
-
Deevashwer authored
-
Bhatu authored
-
Deevashwer authored
-
Bhatu authored
Before this, all input was expected from the server. This was alright when we were compiling to cleartext, as the role doesn't matter in that case. Now we take client input from the client.
-
- Aug 18, 2020
-
-
Bhatu authored
-
Bhatu authored
-
Deevashwer authored
-
- Aug 17, 2020
-
-
Bhatu authored
-
- Aug 15, 2020
-
-
Deevashwer authored
* CrypTFlow2 code
* Some small corrections.
* Updated License to 2020
* Updated repo README to include SCI (CrypTFlow2) and moved CrypTFlow2/ to SCI/
Co-authored-by:
Nishant Kumar <nishant.kr10@gmail.com>
-
- Aug 14, 2020
-
-
Bhatu authored
-
- Jul 26, 2020
-
-
Bhatu authored
-
Bhatu authored
-
Bhatu authored
-
Bhatu authored
-- Implicit broadcasting: if any of the dims in an input is 1, the values along that dimension need to be broadcast so as to match the output dimension. We add ternary operators that check, at each iteration of that dim, whether the input has 1 as that dim, and choose the array indices appropriately (see the sketch after this list).
-- Add support for secure conv3d using multithreaded conv instead of matmul.
-- Fix bug related to overflow in send/receive message.
-- ONNXCompiler: generate input and model parameter inputs separately for 3pc computation too.
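A sketch of the ternary-index idea for implicit broadcasting (illustrative Python; the backend emits the equivalent conditional in the generated code):

    A_shape = (8, 1, 6, 1)
    out_shape = (8, 7, 6, 5)

    def a_index(i0, i1, i2, i3):
        # For each dim, pick 0 if the input dim is 1 (broadcast), else the loop index.
        return (i0 if A_shape[0] != 1 else 0,
                i1 if A_shape[1] != 1 else 0,
                i2 if A_shape[2] != 1 else 0,
                i3 if A_shape[3] != 1 else 0)

    assert a_index(3, 4, 5, 2) == (3, 0, 5, 0)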
-
- Jul 14, 2020
-
-
Bhatu authored
-
- Jul 11, 2020
-
-
Mayank Rathee authored
-
- Jul 08, 2020
-
-
Nishant Kumar authored
-
- Jul 02, 2020
-
-
Nishant Kumar authored
* With new compiler changes
* After backend interface cleanup
* Interface cleaned up
* Removed funcSSCons and FLOAT_PRECISION fix
* funcSSCons right fix1
* Cleanup of several things; Athos and EzPC changes for 2PC
* Residual changes
* More changes
* One left file
-
- Jun 24, 2020
-
-
Nishant Kumar authored
-
- Jun 18, 2020
-
-
Mayank authored
-
- Jun 15, 2020
-
-
Mayank authored
-