We can now take a “token” (here meaning any piece of text: a word, a sentence, or even a large paragraph) and get its full embedding (sometimes referred to as an encoding) via the OpenAI API with “encodeToken”. If we want to do this for several tokens, we can do it in parallel with “encodeTokensInParallel”.
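As a rough illustration, here is a minimal TypeScript sketch of what these two helpers could look like on top of the official `openai` Node SDK. The function bodies, the model name `text-embedding-3-small`, and the exact signatures are assumptions for the sake of the example, not the actual implementation.

```ts
// Sketch only: assumes the official `openai` Node SDK (v4+) and an
// OPENAI_API_KEY in the environment. The real encodeToken /
// encodeTokensInParallel may differ.
import OpenAI from "openai";

const client = new OpenAI();

// Embed a single piece of text (word, sentence, or paragraph).
export async function encodeToken(text: string): Promise<number[]> {
  const res = await client.embeddings.create({
    model: "text-embedding-3-small", // assumed model choice
    input: text,
  });
  return res.data[0].embedding;
}

// Embed several pieces of text by firing the requests concurrently.
export async function encodeTokensInParallel(
  texts: string[]
): Promise<number[][]> {
  return Promise.all(texts.map((t) => encodeToken(t)));
}
```

Note that the embeddings endpoint also accepts an array of inputs, so an alternative to `Promise.all` is a single batched request, which uses fewer API calls.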
I think the next steps are: