Large language model (LLM) agents can reflect on their own chain of thought to improve problem-solving performance.
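The reflect-and-revise loop behind this idea can be sketched as follows. `call_model` is a hypothetical stand-in for any LLM API, stubbed here with canned responses so the example runs offline; a real agent would issue model queries at each step.

```python
def call_model(prompt: str) -> str:
    # Stub: a real agent would query an LLM here. The canned responses
    # simulate a flawed first draft that reflection then corrects.
    if "Critique" in prompt:
        return "The draft forgot to carry the 1; the sum should be 131."
    if "Revise" in prompt:
        return "131"
    return "121"  # deliberately flawed first draft

def solve_with_reflection(question: str, rounds: int = 1) -> str:
    # Draft an answer, then alternate critique and revision passes.
    answer = call_model(f"Question: {question}\nThink step by step.")
    for _ in range(rounds):
        critique = call_model(f"Critique this chain of thought: {answer}")
        answer = call_model(f"Revise the answer given the critique: {critique}")
    return answer

print(solve_with_reflection("What is 58 + 73?"))  # → 131
```

The key design choice is that the critique and revision are separate model calls, so the agent evaluates its reasoning rather than merely resampling an answer.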

Varying an LLM's sampling temperature between 0.0 and 1.0 does not measurably affect problem-solving performance.
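For context, temperature rescales the model's logits before sampling: 0.0 corresponds to greedy (argmax) decoding, while 1.0 leaves the distribution unchanged. A minimal sketch of temperature-scaled softmax, using only the standard library:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Lower temperature sharpens the distribution toward the argmax;
    # higher temperature flattens it.
    if temperature == 0.0:
        # Greedy decoding: all probability mass on the argmax token.
        best = max(range(len(logits)), key=lambda i: logits[i])
        return [1.0 if i == best else 0.0 for i in range(len(logits))]
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.0))  # greedy: [1.0, 0.0, 0.0]
print(softmax_with_temperature(logits, 1.0))  # standard softmax over the logits
```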

Prompting LLMs to produce a concise chain of thought reduces verbosity and inference cost without degrading problem-solving performance.

Curriculum learning improves the performance of some, but not all, reinforcement learning (RL) agents in Pac-Man.
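A curriculum here means ordering training environments from easy to hard. The sketch below shows the scheduling idea only: `train_on` is a hypothetical stub, and the layout names are illustrative (styled after the Berkeley Pac-Man project layouts), not a claim about the experimental setup.

```python
def train_on(layout: str, episodes: int) -> None:
    # Stub: a real implementation would run RL training episodes
    # (e.g. Q-learning) on the given Pac-Man layout.
    print(f"training {episodes} episodes on {layout}")

# Easy-to-hard ordering of (layout, episode budget) pairs.
CURRICULUM = [
    ("smallGrid", 200),       # few pellets, no ghosts: easy
    ("mediumGrid", 500),      # adds a ghost
    ("mediumClassic", 2000),  # full game: hard
]

def run_curriculum(curriculum):
    # Train on each stage in order, carrying the learned policy forward.
    completed = []
    for layout, episodes in curriculum:
        train_on(layout, episodes)
        completed.append(layout)
    return completed

run_curriculum(CURRICULUM)  # trains stages easiest-first
```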

LLMs can generate human-readable explanations for decisions made by AI systems.

LLMs such as GPT-4 can be used to automate the creation of lecture slides from course material.

To see more of my research, please visit my Google Scholar and Semantic Scholar profiles.