We also have a more fleshed-out playground at https://promptfiddle.com.
BAML is a DSL for prompts, where prompts are modeled as functions. Our compiler transforms your LLM function declaration into the relevant API call and parses the output for you.
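A minimal sketch of what that looks like (the class, function, and client names here are illustrative, not from a real project):

```baml
class Resume {
  name string
  skills string[]
}

function ExtractResume(resume_text: string) -> Resume {
  client GPT4
  prompt #"
    Extract the following from this resume:
    {{ resume_text }}

    {{ ctx.output_format }}
  "#
}
```

The compiler turns ExtractResume into the underlying API request, injects the output type into the prompt via ctx.output_format, and parses the model's reply into a Resume.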
We serialize using "type definitions" instead of JSON schemas, since they are more efficient and easier for models to understand. We talk more about why here: https://www.boundaryml.com/blog/type-definition-prompting-ba...
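Roughly, the idea is that the model sees something closer to a TypeScript-style type definition than a full JSON schema. As an illustrative comparison (not the exact serialization BAML emits):

```
// JSON-schema style (verbose):
{"type": "object",
 "properties": {"name": {"type": "string"},
                "skills": {"type": "array", "items": {"type": "string"}}},
 "required": ["name", "skills"]}

// Type-definition style (compact):
{
  name: string,
  skills: string[]
}
```

The second form expresses the same structure in far fewer tokens.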
We ran the Berkeley Function Calling Benchmark on our approach a few months ago and achieved state-of-the-art results: https://www.boundaryml.com/blog/sota-function-calling?q=0