Paper Explained: Tree of Thoughts — Deliberate Problem Solving with Large Language Models

Isaac Kargar
11 min read · Jul 3

Note: Generative AI services are used as assistants in this blog post!

image generated by Bing Image Creator

This post is a summary of the paper Tree of Thoughts: Deliberate Problem Solving with Large Language Models.

Introduction

There’s been a lot of progress with language models recently, and they perform well on many tasks. But at inference time they’re still stuck in a pattern of token-level, left-to-right decision making: they generate one step at a time, just moving along from start to finish. This falls short on tasks that require looking ahead, backtracking to revise an earlier choice, or where the first few decisions are especially important.


The paper proposes a new framework called “Tree of Thoughts” (ToT). It generalizes the popular “Chain of Thought” approach to prompting language models. With ToT, the model reasons over coherent units of text, called “thoughts,” that serve as intermediate steps toward a solution. This means the model can consider multiple candidate thoughts, follow different paths of reasoning, and evaluate its own intermediate progress to decide what to do next. Combined with search algorithms, it can also look ahead or backtrack when needed to make global decisions.
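To make the idea concrete, here is a minimal sketch of ToT as a breadth-first search over thoughts. The `propose` and `evaluate` callables are hypothetical stand-ins for the two LLM calls the paper describes (a thought generator and a state evaluator); the toy versions below just build digit strings and score them numerically, so the search mechanics are visible without an actual model.

```python
from typing import Callable, List

def tree_of_thoughts_bfs(
    root: str,
    propose: Callable[[str], List[str]],   # LLM stand-in: candidate next thoughts
    evaluate: Callable[[str], float],      # LLM stand-in: score a partial solution
    depth: int = 3,
    beam: int = 2,
) -> str:
    """Breadth-first ToT search: expand every frontier state,
    score all candidates, and keep only the top `beam` of them."""
    frontier = [root]
    for _ in range(depth):
        candidates = [c for state in frontier for c in propose(state)]
        if not candidates:
            break
        candidates.sort(key=evaluate, reverse=True)
        frontier = candidates[:beam]  # prune: deliberate, not greedy token-by-token
    return max(frontier, key=evaluate)

# Toy stand-ins: thoughts are digits appended to a string; bigger number = better.
propose = lambda s: [s + d for d in "123"]
evaluate = lambda s: int(s) if s else 0

print(tree_of_thoughts_bfs("", propose, evaluate, depth=3, beam=2))  # "333"
```

In the paper, the same skeleton is driven by prompting: `propose` samples a few candidate next steps from the model, and `evaluate` asks the model to rate each partial solution (e.g. “sure/maybe/impossible”), which is what enables lookahead and pruning.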

Experiments show that ToT substantially improves language models’ problem-solving abilities. This was demonstrated on three new tasks that require non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. In Game of 24, for example, standard chain-of-thought prompting solved only 4% of the tasks, while ToT reached a success rate of 74%! All the prompts used in the experiments are available in their repo:

You might find it interesting to take a look!
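For readers unfamiliar with the benchmark: in Game of 24, you are given four numbers and must combine all of them with +, −, ×, ÷ to reach 24. The sketch below is an illustrative brute-force checker (not from the paper) that recursively merges any two numbers until one value remains:

```python
from itertools import permutations

def can_make_24(nums, target=24.0, eps=1e-6):
    """Return True if the numbers can be combined with +, -, *, /
    (each number used exactly once) to reach `target`."""
    if len(nums) == 1:
        return abs(nums[0] - target) < eps
    # Pick an ordered pair (covers both a-b and b-a, a/b and b/a).
    for i, j in permutations(range(len(nums)), 2):
        rest = [nums[k] for k in range(len(nums)) if k not in (i, j)]
        a, b = nums[i], nums[j]
        results = [a + b, a - b, a * b]
        if abs(b) > eps:  # avoid division by zero
            results.append(a / b)
        if any(can_make_24(rest + [r], target, eps) for r in results):
            return True
    return False

print(can_make_24([4, 9, 10, 13]))  # solvable: (13 - 9) * (10 - 4) = 24
```

ToT suits this task well because each intermediate equation is a natural “thought” that the model can propose and then judge as sure, maybe, or impossible to complete.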

System 1 vs System 2
