Generating high-quality charts with Large Language Models presents significant challenges due to limited data and the high cost of scaling through human curation. 〈instruction, data, code〉 triplets are scarce and expensive to curate manually, as their creation demands technical expertise. To address this scalability issue, we introduce a reference-free automatic feedback generator, which eliminates the need for costly human intervention. Our novel framework, C2, consists of (1) an automatic feedback provider (ChartAF) and (2) a diverse, reference-free dataset (ChartUIE-8K). Quantitative results are compelling: in our first experiment, 74% of respondents strongly preferred, and 10% preferred, the results after feedback. The second post-feedback experiment demonstrates that ChartAF outperforms nine baselines. Moreover, ChartUIE-8K significantly improves data diversity, increasing queries, datasets, and chart types by 5982%, 1936%, and 91%, respectively, over existing benchmarks. Finally, a user study revealed that 94% of participants preferred ChartUIE-8K's queries, with 93% deeming them aligned with real-world use cases. Core contributions are available as open source at an anonymized project site, with ample qualitative examples.
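At a high level, the framework pairs a chart generator with an automatic feedback provider and iterates. The sketch below illustrates that generate-assess-refine loop only; every function name here is a hypothetical placeholder standing in for an LLM or ChartAF call, not the paper's actual API.

```python
# Illustrative sketch of a reference-free feedback loop, in the spirit of C2.
# All functions below are hypothetical stand-ins, not the released code.

def generate_chart_code(query: str, feedback: str = "") -> str:
    """Stand-in for an LLM call that emits plotting code for a user query."""
    base = f"plot({query!r})"
    # In-context refinement: the prior feedback is folded into the prompt.
    return base + f"  # revised per: {feedback}" if feedback else base


def assess_chart(code: str) -> str:
    """Stand-in for ChartAF-style automatic feedback (no reference chart needed).

    Returns an empty string when no further issues are detected.
    """
    return "" if "revised" in code else "add axis labels and a legend"


def generate_with_feedback(query: str, max_rounds: int = 2) -> str:
    """Generate, collect feedback, and regenerate until feedback is empty."""
    code = generate_chart_code(query)
    for _ in range(max_rounds):
        feedback = assess_chart(code)
        if not feedback:  # feedback provider found nothing to fix
            break
        code = generate_chart_code(query, feedback)
    return code
```

Because the assessor needs no gold-reference chart, the same loop runs on arbitrary user queries, which is what makes scaling past human-curated triplets possible.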
First, a ChartUIE-8K query is shown, followed by the pre- and post-feedback charts. The left chart is the pre-feedback generation; the right is the post-feedback (in-context-tuned) generation. The post-feedback section is a carousel: press the arrows to compare ChartAF with the baselines. These representative examples are taken directly from our human study.
First, a ChartUIE-8K query is shown, followed by the pre- and post-TTS outcomes. The left chart is the pre-TTS generation; the right is the post-TTS generation. These representative examples are taken directly from our human study.
Source Code and Paper are under Apache-2.0 License © Authors