
10 min read
Using general-purpose LLMs such as ChatGPT or LLaMA to extract relations between entities in text proves expensive and slow. But what if we used a Seq2Seq model, a sort of hybrid between GPT-style and BERT-style architectures, the latter of which are still widely used for relation extraction? Could we make the task cheaper and faster?