Paper Explained: Tree of Thoughts — Deliberate Problem Solving with Large Language Models
Note: Generative AI services are used as assistants in this blog post!
This post is a summary of the paper Tree of Thoughts: Deliberate Problem Solving with Large Language Models.
Language models have made a lot of progress recently and handle many tasks well. But they’re stuck in a left-to-right pattern: they make decisions one token at a time, moving from start to finish without ever revisiting anything. That falls short on tasks that call for thinking ahead, going back to adjust an earlier choice, or where the first few decisions really matter.
Now there’s a new approach called “Tree of Thoughts,” or ToT. It generalizes the popular “Chain of Thought” method of prompting language models. ToT lets a model reason over coherent units of text, called “thoughts,” that serve as intermediate steps toward a solution. This means the model can take its time: consider multiple options, follow different reasoning paths, and evaluate its own choices to decide what to do next. It can also look ahead, or backtrack to fix something when a decision turns out to matter.
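To make the idea concrete, here is a minimal sketch (my own illustration, not the authors’ implementation) of a breadth-first ToT search. In the real system, `propose` and `evaluate` would be LLM calls that generate and score candidate thoughts; here they are toy stand-ins that build strings of digits and score them by digit sum:

```python
from typing import Callable, List

def tot_bfs(root: str,
            propose: Callable[[str], List[str]],
            evaluate: Callable[[str], float],
            depth: int = 3,
            beam: int = 2) -> str:
    """Breadth-first Tree-of-Thoughts search: expand every candidate into
    new partial 'thoughts', score them, and keep only the best `beam`."""
    frontier = [root]
    for _ in range(depth):
        candidates = [c for state in frontier for c in propose(state)]
        if not candidates:
            break
        # Heuristic pruning: keep the highest-scoring thoughts.
        frontier = sorted(candidates, key=evaluate, reverse=True)[:beam]
    return max(frontier, key=evaluate)

# Toy stand-ins for the LLM's propose/evaluate steps.
def propose(state: str) -> List[str]:
    return [state + d for d in "123"]

def evaluate(state: str) -> float:
    return sum(int(c) for c in state)

best = tot_bfs("", propose, evaluate, depth=3, beam=2)
print(best)  # '333' — the beam keeps the largest-digit extensions
```

The paper also explores a depth-first variant with backtracking; the key ingredients in either case are the same: a thought generator, a state evaluator, and a search algorithm on top.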
Experiments show that ToT substantially improves how well language models solve problems. This was especially true on three new tasks that require non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. In the Game of 24, for example, chain-of-thought prompting solved only 4% of tasks, while ToT reached a 74% success rate! All the prompts used in the experiments are available in the authors’ repo:
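For context, the Game of 24 asks you to combine four given numbers with +, −, ×, ÷ to reach exactly 24. A tiny brute-force checker (my own illustration, not from the paper) shows what counts as a solvable instance:

```python
from itertools import permutations

def solves_24(nums, target=24, eps=1e-6):
    """Recursively pick any two numbers, combine them with an operation,
    and recurse on the smaller list until one number remains."""
    if len(nums) == 1:
        return abs(nums[0] - target) < eps
    for i, j in permutations(range(len(nums)), 2):
        rest = [nums[k] for k in range(len(nums)) if k not in (i, j)]
        a, b = nums[i], nums[j]
        results = [a + b, a - b, a * b]
        if abs(b) > eps:  # avoid division by zero
            results.append(a / b)
        if any(solves_24(rest + [r], target, eps) for r in results):
            return True
    return False

print(solves_24([4, 9, 10, 13]))  # True: (10 - 4) * (13 - 9) = 24
print(solves_24([1, 1, 1, 1]))   # False: no way to reach 24
```

A chain-of-thought model has to commit to its first arithmetic step and hope it works out, which is exactly where ToT’s ability to explore and backtrack pays off.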
GitHub — ysymyth/tree-of-thought-llm: code for the paper Tree of Thoughts: Deliberate Problem Solving with Large Language Models.
You might find it interesting to take a look!