Does this support batch inference? And does it support prompts that generate more than 75 tokens?