Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models

Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, Nan Duan*
Microsoft Research Asia
{chewu, v-sheyin, t-weizhenqi, v-xiaodwang, v-zetang, nanduan}@microsoft.com
*Corresponding author.

arXiv:2303.04671v1 [cs.CV] 8 Mar 2023

Abstract

ChatGPT is attracting cross-field interest as it provides a language interface with remarkable conversational competency and reasoning capabilities across many domains. However, since ChatGPT is trained with languages, it is currently not capable of processing or generating images from the visual world. At the same time, Visual Foundation Models, such as Visual Transformers or Stable Diffusion, although showing great visual understanding and generation capabilities, are only experts on specific tasks with one-round fixed inputs and outputs. To this end, we build a system called Visual ChatGPT, incorporating different Visual Foundation Models, to enable the user to interact with ChatGPT by 1) sending and receiving not only languages but also images; 2) providing complex visual questions or visual editing instructions that require the collaboration of multiple AI models over multiple steps; 3) providing feedback and asking for corrected results. We design a series of prompts to inject the visual model information into ChatGPT, considering models with multiple inputs/outputs and models that require visual feedback. Experiments show that Visual ChatGPT opens the door to investigating the visual roles of ChatGPT with the help of Visual Foundation Models. Our system is publicly available at https://github.com/microsoft/visual-chatgpt.

Figure 1. Architecture of Visual ChatGPT. [The figure shows a user query ("please generate a red flower conditioned on the predicted depth of this image and then make it like a cartoon, step by step") being answered by ChatGPT through the Prompt Manager and a pool of Visual Foundation Models (BLIP, Stable Diffusion, Pix2Pix, ControlNet, detection models), with iterative reasoning producing the outputs.]

1. Introduction

In recent years, the development of large language models (LLMs) has shown incredible progress, such as T5 [32], BLOOM [36], and GPT-3 [5]. One of the most significant breakthroughs is ChatGPT, which is built upon InstructGPT [29] and specifically trained to interact with users in a genuinely conversational manner, allowing it to maintain the context of the current conversation, handle follow-up questions, and correct answers produced by itself.

Although powerful, ChatGPT is limited in its ability to process visual information since it is trained with a single language modality, while Visual Foundation Models (VFMs) have shown tremendous potential in computer vision, with their ability to understand and generate complex images. For instance, the BLIP model [22] is an expert in understanding and providing the description of an image. Stable Diffusion [35] is an expert in synthesizing an image based on text prompts. However, owing to their task-specific nature, the demanding and fixed input-output formats make VFMs less flexible than conversational language models in human-machine interaction.

Could we build a ChatGPT-like system that also supports image understanding and generation? One intuitive idea is to train a multi-modal conversational model. However, building such a system would consume a large amount of data and computational resources. Besides, another challenge arises: what if we want to incorporate modalities beyond languages and images, like videos or voices? Would it be necessary to train a totally new multi-modality model every time a new modality or function comes along?

We answer the above questions by proposing a system named Visual ChatGPT. Instead of training a new multi-modal ChatGPT from scratch, we build Visual ChatGPT directly on top of ChatGPT and incorporate a variety of VFMs. To bridge the gap between ChatGPT and these VFMs, we propose a Prompt Manager which supports the following functions: 1) it explicitly tells ChatGPT the capability of each VFM and specifies the input-output formats; 2) it converts different visual information, for instance, png images, depth images, and mask matrices, to language format to help ChatGPT understand; 3) it handles the histories, priorities, and conflicts of different Visual Foundation Models. With the help of the Prompt Manager, ChatGPT can leverage these VFMs and receive their feedback in an iterative manner until it meets the requirements of the users or reaches the ending condition.

As shown in Fig. 1, a user uploads an image of a yellow flower and enters a complex language instruction "please generate a red flower conditioned on the predicted depth of this image and then make it like a cartoon, step by step". With the help of the Prompt Manager, Visual ChatGPT starts a chain of execution of related Visual Foundation Models. In this case, it first applies the depth estimation model to detect the depth information, then utilizes the depth-to-image model to generate a figure of a red flower with the depth information, and finally leverages the style transfer VFM based on the Stable Diffusion model to change the style of this image into a cartoon. During the above pipeline, the Prompt Manager serves as a dispatcher for ChatGPT by providing the type of visual formats and recording the process of information transformation. Finally, when Visual ChatGPT obtains the hint of "cartoon" from the Prompt Manager, it ends the execution pipeline and shows the final result.

In summary, our contributions are as follows:

• We propose Visual ChatGPT, which opens the door to combining ChatGPT and Visual Foundation Models and enables ChatGPT to handle complex visual tasks;

• We design a Prompt Manager, in which we involve 22 different VFMs and define the internal correlations among them for better interaction and combination;

• Massive zero-shot experiments are conducted and abundant cases are shown to verify the understanding and generation ability of Visual ChatGPT.

2. Related Works

2.1. Natural Language and Vision

Surrounded by various modalities (sound, vision, video, etc.), language and vision are the two main mediums transmitting information in our life. There is a natural link between natural language and vision, and most questions require joint modeling of both streams to produce satisfactory results [15, 26, 48]; e.g., visual question answering (VQA) [2] takes an image and one corresponding question as input and requires an answer to be generated according to the information in the given image. Owing to the success of large language models (LLMs) like InstructGPT [29], one can easily interact with such a model or obtain feedback in natural language format, but these LLMs are incapable of processing visual information. To fuse vision processing ability into such LLMs, several challenges lie ahead, since it is hard to train either large language models or vision models, and well-designed instructions [4, 55, 21] and cumbersome conversions [30, 52] are required to connect different modalities. Although several works have explored leveraging pre-trained LLMs to improve performance on vision-language (VL) tasks, those methods support only a few specific VL tasks (from language to vision or from vision to language) and require labeled data for training [38, 1, 22].

2.2. Pre-trained Models for VL tasks

To better extract visual features, frozen pre-trained image encoders are adopted in early works [9, 25, 54], and the recent LiT [52] applies CLIP pre-training [30] with a frozen ViT model [51]. From another perspective, exploiting the knowledge from LLMs also counts. Following the introduction of the Transformer [39], pre-trained LLMs have demonstrated powerful text understanding and generation capabilities [31, 19, 37, 5], and such breakthroughs also benefit VL modelling [13, 14, 3, 49], where these works add extra adapter modules [17] to the pre-trained LLMs to align visual features to the text space. With the increased number of model parameters, it is hard to train those pre-trained LLMs, thus more effort has been devoted to directly leveraging off-the-shelf frozen pre-trained LLMs for VL tasks [12, 38, 8, 46, 50].

2.3. Guidance of Pre-trained LLMs for VL tasks

To deal with complex tasks, e.g., commonsense reasoning [11], Chain-of-Thought (CoT) prompting is proposed to elicit the multi-step reasoning abilities of LLMs [42]. More concretely, CoT asks the LLMs to generate intermediate answers towards the final result. Existing studies [57] have divided this technique into two categories: Few-Shot-CoT [56] and Zero-Shot-CoT [20]. In the few-shot setting, the LLMs perform CoT reasoning with several demonstrations [58, 41], and it turns out that the LLMs can acquire better abilities to solve complex problems. Further, recent studies [20, 47] have shown that LLMs can be self-improved by leveraging self-generated rationales under the zero-shot setting. The above studies mainly focus on a single modality, i.e., language. Recently, Multimodal-CoT [57] was proposed to incorporate language and vision modalities into a two-stage framework that separates rationale generation and answer inference. However, this method only shows superiority under specific scenarios, i.e., the ScienceQA benchmark [28]. In a nutshell, our work extends the potential of CoT to massive tasks, including but not limited to text-to-image generation [27], image-to-image translation [18], image-to-text generation [40], etc.
Figure 2. Overview of Visual ChatGPT. The left side shows a three-round dialogue, the middle shows the flowchart of how Visual ChatGPT iteratively invokes Visual Foundation Models and provides answers, and the right side shows the detailed process of the second QA.

3. Visual ChatGPT

Let S = {(Q_1, A_1), (Q_2, A_2), ..., (Q_N, A_N)} be a dialogue system with N question-answer pairs. To get the response A_i in the i-th round of conversation, a series of VFMs and their intermediate outputs A_i^(j) are involved, where j denotes the output from the j-th VFM F in the i-th round. More concretely, with the help of the Prompt Manager M, the format of A_i^(j) is constantly modified to meet the input format of each F. In the end, the system outputs A_i^(j) if it is denoted as the final response, and no more VFMs are executed. Eq. (1) provides a formal definition of Visual ChatGPT:

A_i^(j+1) = ChatGPT(M(P), M(F), M(H_<i), M(Q_i), M(R_i^(<j)), M(F(A_i^(j))))    (1)

– System Principle P: The System Principle provides basic rules for Visual ChatGPT, e.g., it should be sensitive to image filenames and should use VFMs to handle images instead of generating results based on the chat history.

– Visual Foundation Model F: One core of Visual ChatGPT is the combination of various VFMs: F = {f_1, f_2, ..., f_N}, where each foundation model f_i contains a determined function with explicit inputs and outputs.

– History of Dialogue H_<i: We define the dialogue history of the i-th round of conversation as the string concatenation of the previous question-answer pairs, i.e., {(Q_1, A_1), (Q_2, A_2), ..., (Q_{i-1}, A_{i-1})}. Besides, we truncate the dialogue history with a maximum length threshold to meet the input length limit of the ChatGPT model.

– User Query Q_i: In Visual ChatGPT, a query is a general term, since it can include both linguistic and visual queries. For instance, Fig. 1 shows an example of a query containing both the query text and the corresponding image.

– History of Reasoning R_i^(<j): To solve a complex question, Visual ChatGPT may require the collaboration of multiple VFMs. For the i-th round of conversation, R_i^(<j) is all the previous reasoning history from the j invoked VFMs.

– Intermediate Answer A_i^(j): When handling a complex query, Visual ChatGPT tries to obtain the final answer step by step by invoking different VFMs logically, thus producing multiple intermediate answers.

– Prompt Manager M: The Prompt Manager is designed to convert all visual signals into language so that the ChatGPT model can understand them. In the following subsections, we focus on introducing how M manages the different parts above: P, F, Q_i, and F(A_i^(j)).
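To make this formulation concrete, the following is a minimal sketch of the loop that Eq. (1) describes. It is not the released implementation: the helper names (prompt_manager, its render_* methods, and the decision object returned by chatgpt) are hypothetical placeholders, and the real system drives this loop through ChatGPT and LangChain rather than hand-written control flow.

```python
# Hypothetical sketch of the iterative loop in Eq. (1): ChatGPT repeatedly decides
# whether to invoke a VFM, the Prompt Manager converts every signal to text, and the
# loop stops once ChatGPT marks its answer as final. All names are illustrative only.
def visual_chatgpt_round(query, history, principles, vfms, prompt_manager, chatgpt):
    reasoning_history = []   # R_i^(<j): textual records of previous VFM calls
    intermediate = None      # A_i^(j): latest VFM output (e.g., an image path)

    while True:
        # Assemble the prompt from all managed parts, mirroring Eq. (1).
        prompt = "\n".join([
            prompt_manager.render_principles(principles),        # M(P)
            prompt_manager.render_tools(vfms),                   # M(F)
            prompt_manager.render_history(history),              # M(H_<i)
            prompt_manager.render_query(query),                  # M(Q_i)
            prompt_manager.render_reasoning(reasoning_history),  # M(R_i^(<j))
            prompt_manager.render_output(intermediate),          # M(F(A_i^(j)))
        ])
        decision = chatgpt(prompt)  # the LLM decides: call a tool or answer directly

        if decision.use_tool:
            tool = vfms[decision.tool_name]
            intermediate = tool(decision.tool_input)             # run the chosen VFM
            reasoning_history.append(
                (decision.tool_name, decision.tool_input, intermediate))
        else:
            return decision.final_answer                         # A_i: final response
```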
Figure 3. Overview of the Prompt Manager. It converts all non-language signals into language so that ChatGPT can understand them. [The figure details the prompts for the system principles (VFM accessibility, filename sensitivity, reasoning format), the per-VFM fields (Name, Usage, Inputs/Outputs, optional Example, illustrated with BLIP VQA and Pix2Pix), and the handling of user queries and model outputs (unique filenames, force VFM thinking, chained filenames, asking for more details).]
3.1. Prompt Managing of System Principles M(P)

Visual ChatGPT is a system that integrates different VFMs to understand visual information and generate corresponding answers. To accomplish this, some system principles need to be customized, which are then transferred into prompts that ChatGPT can understand. These prompts serve several purposes (a sketch of how they can be rendered into a prompt prefix follows the list), including:

• Role of Visual ChatGPT Visual ChatGPT is designed to assist with a range of text and visual-related tasks, such as VQA, image generation, and editing.

• VFMs Accessibility Visual ChatGPT has access to a list of VFMs to solve various VL tasks. The decision of which foundation model to use is entirely made by the ChatGPT model itself, thus making it easy to support new VFMs and VL tasks.

• Filename Sensitivity Visual ChatGPT accesses image files according to the filename, and it is crucial to use precise filenames to avoid ambiguity, since one round of conversation may contain multiple images and their different updated versions, and the misuse of filenames will lead to confusion about which image is currently being discussed. Therefore, Visual ChatGPT is designed to be strict about filename usage, ensuring that it retrieves and manipulates the correct image files.

• Chain-of-Thought As shown in Fig. 1, coping with one seemingly simple command may require multiple VFMs; e.g., the query "generate a red flower conditioned on the predicted depth of this image and then make it like a cartoon" requires depth estimation, depth-to-image, and style transfer VFMs. To tackle more challenging queries by decomposing them into subproblems, CoT is introduced in Visual ChatGPT to help decide, leverage, and dispatch multiple VFMs.

• Reasoning Format Strictness Visual ChatGPT must follow strict reasoning formats. Thus we parse the intermediate reasoning results with elaborate regex matching algorithms, and construct a rational input format for the ChatGPT model to help it determine the next execution, e.g., triggering a new VFM or returning the final response.

• Reliability As a language model, Visual ChatGPT may fabricate fake image filenames or facts, which can make the system unreliable. To handle such issues, we design prompts that require Visual ChatGPT to be loyal to the output of the vision foundation models and not fabricate image content or filenames. Besides, the collaboration of multiple VFMs can increase system reliability, thus the prompts we construct guide ChatGPT to leverage VFMs preferentially instead of generating results based on the conversation history.
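The exact prompt wording is defined in the released code rather than in this section, so the snippet below is only a hedged illustration of how such principles could be concatenated into a prefix prompt; the phrasing echoes the fragments shown in Fig. 3, but the PRINCIPLES constant and render_principles helper are assumptions, not the official prompts.

```python
# Illustrative only: one way to turn the system principles of Sec. 3.1 into a prefix
# prompt. The wording mirrors fragments shown in Fig. 3, not the released prompt text.
PRINCIPLES = [
    "Visual ChatGPT is designed to assist with a range of text and visual related tasks.",
    "Visual ChatGPT can invoke different VFMs to indirectly understand pictures.",
    "Visual ChatGPT is very strict to the file name and will never fabricate nonexistent files.",
    "Visual ChatGPT is loyal to the tool observation and never fakes image content or filenames.",
]

REASONING_FORMAT = (
    "Thought: Do I need to use a tool? Yes\n"
    "Action: the action to take, should be one of [{tool_names}]\n"
    "Action Input: the input to the action"
)

def render_principles(tool_names):
    """Concatenate the principles and the strict reasoning format into one prefix prompt."""
    return "\n".join(PRINCIPLES) + "\n" + REASONING_FORMAT.format(
        tool_names=", ".join(tool_names))
```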
Table 1. Foundation models supported by Visual ChatGPT.
• Remove Objects from Image [10, 35]
• Replace Objects from Image [10, 35]
• Change Image by the Text [35]
• Image Question Answering [23]
• Image-to-Text [23] / Text-to-Image [35]
• Image-to-Edge [45] / Edge-to-Image [53]
• Image-to-Line [16] / Line-to-Image [53]
• Image-to-Hed [44] / Hed-to-Image [53]
• Image-to-Seg [24] / Seg-to-Image [53]
• Image-to-Depth [34, 33] / Depth-to-Image [53]
• Image-to-NormalMap [34, 33] / NormalMap-to-Image [53]
• Image-to-Sketch [44] / Sketch-to-Image [53]
• Image-to-Pose [6] / Pose-to-Image [53]

3.2. Prompt Managing of Foundation Models M(F)

Visual ChatGPT is equipped with multiple VFMs to handle various VL tasks. Since these different VFMs may share some similarities, e.g., the replacement of objects in an image can be regarded as generating a new image, and both the Image-to-Text (I2T) task and the Image Question Answering (VQA) task can be understood as giving a response according to the provided image, it is critical to distinguish among them. As shown in Fig. 3, the Prompt Manager specifically defines the following aspects to help Visual ChatGPT accurately understand and handle the VL tasks (a sketch of one resulting tool definition follows the list):

• Name The name prompt provides an abstract of the overall function of each VFM, e.g., "answer question about the image", and it not only helps Visual ChatGPT understand the purpose of the VFM in a concise manner but also serves as the entry point to the VFM.

• Usage The usage prompt describes the specific scenario where the VFM should be used. For example, the Pix2Pix model [35] is suitable for changing the style of an image. Providing this information helps Visual ChatGPT make informed decisions about which VFM to use for a particular task.

• Inputs/Outputs The inputs and outputs prompt outlines the format of inputs and outputs required by each VFM; since this format can vary significantly, it is crucial to provide clear guidelines for Visual ChatGPT to execute the VFMs correctly.

• Example (Optional) The example prompt is optional, but it can help Visual ChatGPT better understand how to use a particular VFM under the specific input template and deal with more complex queries.
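Visual ChatGPT wires these fields into ChatGPT through LangChain [7]. Below is a small, hedged sketch of what one such tool definition might look like: the descriptive strings for the "Instruct Image Using Text" tool follow Fig. 3, while the dictionary structure and the instruct_image stub are illustrative assumptions rather than the released code.

```python
# Illustrative tool definition combining the four fields of Sec. 3.2.
# The description text mirrors the fragments shown in Fig. 3; the callable is a stub.
def instruct_image(comma_separated_args: str) -> str:
    """Hypothetical wrapper: 'image_path, instruction' in, new image path out."""
    image_path, instruction = [s.strip() for s in comma_separated_args.split(",", 1)]
    ...  # run the InstructPix2Pix pipeline here and save the edited image
    return "image/<new_uuid>_pix2pix_<prev>_<org>.png"  # placeholder chained name

instruct_image_tool = {
    "name": "Instruct Image Using Text",                                  # Name
    "usage": "useful for when you want the style of the image to be like "
             "the text, e.g. make it look like a painting.",              # Usage
    "inputs_outputs": "the input should be a comma separated string of two, "
                      "representing the image_path and the text.",        # Inputs/Outputs
    "example": "image/5b227bce.png, make it look like an oil painting",   # Example (optional)
    "func": instruct_image,
}
```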
3.3. Prompt Managing of User Queries M(Q_i)

Visual ChatGPT supports a variety of user queries, including languages or images, simple or complex ones, and
the reference of multiple images. The Prompt Manager handles user queries in the following two aspects (a short sketch of both mechanisms follows the list):

• Generate Unique Filename Visual ChatGPT can handle two types of image-related queries: those that involve newly uploaded images and those that involve references to existing images. For newly uploaded images, Visual ChatGPT generates a unique filename with a universally unique identifier (UUID) and adds a prefix string "image" representing the relative directory, e.g., "image/{uuid}.png". Although the newly uploaded image will not be fed into ChatGPT, a fake dialogue history is generated with a question stating the image's filename and an answer indicating that the image has been received. This fake dialogue history assists in the following dialogues. For queries that involve references to existing images, Visual ChatGPT ignores the filename check. This approach has proven beneficial since ChatGPT is able to understand fuzzy matching of user queries as long as it does not lead to ambiguity, e.g., with UUID names.

• Force VFM Thinking To ensure that VFMs are successfully triggered, we append a suffix prompt to Q_i: "Since Visual ChatGPT is a text language model, Visual ChatGPT must use tools to observe images rather than imagination. The thoughts and observations are only visible for Visual ChatGPT, Visual ChatGPT should remember to repeat important information in the final response for Human. Thought: Do I need to use a tool?". This prompt serves two purposes: 1) it prompts Visual ChatGPT to use foundation models instead of relying solely on its imagination; 2) it encourages Visual ChatGPT to provide the specific outputs generated by the foundation models, rather than generic responses such as "here you are".
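As a hedged illustration of the two mechanisms above, the snippet below shows how a fresh upload could be renamed with a UUID under the "image/" directory and how the force-thinking suffix could be appended to a query. Only the quoted suffix text comes from the paper; the helper functions are assumptions.

```python
import os
import shutil
import uuid

def register_upload(uploaded_path: str) -> str:
    """Copy a newly uploaded image to a unique UUID-based name under 'image/'."""
    os.makedirs("image", exist_ok=True)
    new_path = f"image/{uuid.uuid4().hex[:8]}.png"
    shutil.copyfile(uploaded_path, new_path)
    return new_path

# Suffix quoted from Sec. 3.3; appending it is the "force VFM thinking" step.
FORCE_THINKING_SUFFIX = (
    "Since Visual ChatGPT is a text language model, Visual ChatGPT must use tools to "
    "observe images rather than imagination. The thoughts and observations are only "
    "visible for Visual ChatGPT, Visual ChatGPT should remember to repeat important "
    "information in the final response for Human. Thought: Do I need to use a tool?"
)

def augment_query(user_query: str) -> str:
    """Append the force-thinking suffix to the user query before it reaches ChatGPT."""
    return user_query + " " + FORCE_THINKING_SUFFIX
```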

3.4. Prompt Managing of Foundation Model Outputs M(F(A_i^(j)))

For the intermediate outputs from different VFMs F(A_i^(j)), Visual ChatGPT will implicitly summarize them and feed them to ChatGPT for subsequent interaction, i.e., calling other VFMs for further operations until the ending condition is reached or feedback is given to the users. The inner steps can be summarized as follows:

• Generate Chained Filename Since the intermediate outputs of Visual ChatGPT will become the inputs of the next implicit conversational round, we should make those outputs more logical to help the LLMs better understand the reasoning process. Specifically, the images generated by the Visual Foundation Models are saved under the "image/" folder, which hints that the following string represents an image name. The image is then named "{Name}_{Operation}_{Prev_Name}_{Org_Name}", where {Name} is the UUID name mentioned above, {Operation} is the operation name, {Prev_Name} is the unique identifier of the input image, and {Org_Name} is the original name of the image uploaded by the user or generated by VFMs. For instance, "image/ui3c_edge-of_o0ec_nji9dcgf.png" is a canny edge image named "ui3c" of input "o0ec", and the original name of this image is "nji9dcgf". Such a naming rule hints ChatGPT about the attributes of the intermediate result, i.e., that it is an image, and about how it was generated from a series of operations.

• Call for More VFMs One core of Visual ChatGPT is that it can automatically call for more VFMs to finish the user's command. More concretely, we make ChatGPT keep asking itself whether it needs VFMs to solve the current problem by extending one suffix "Thought: " at the end of each generation.

• Ask for More Details When the user's command is ambiguous, Visual ChatGPT should ask the users for more details to help better leverage the VFMs. This design is safe and critical since the LLMs are not permitted to arbitrarily tamper with or speculate about the user's intention without basis, especially when the input information is insufficient.

4. Experiments

4.1. Setup

We implement the LLM with ChatGPT [29] (OpenAI "text-davinci-003" version), and guide the LLM with LangChain [7] (https://github.com/hwchase17/langchain). We collect foundation models from HuggingFace Transformers [43] (https://huggingface.co/docs/diffusers/index), MaskFormer [10] (https://github.com/facebookresearch/MaskFormer), and ControlNet [53] (https://github.com/lllyasviel/ControlNet). The full deployment of all 22 VFMs requires 4 Nvidia V100 GPUs, but users are allowed to deploy fewer foundation models to save GPU resources flexibly. The maximum length of the chat history is 2,000, and excessive tokens are truncated to meet the input length limit of ChatGPT.
4.2. A full Case of Multiple Rounds Dialogue
calling other VFMs for further operations until reaching the
ending condition or giving the feedback to the users. The Fig. 4 shows a 16-round multi-modal dialogue case of
inner steps can be summarized below: Visual ChatGPT. In this case, the user asks for both text
and image questions, and Visual ChatGPT responses with
• Generate Chained Filename Since the interme- both text and images. The dialogue involves a discussion
diate outputs of Visual ChatGPT will become the of multiple images, processing with multiple fundamental
inputs for the next implicit conversational round, models, and handling questions that require multi-steps.
we should make those outputs more logical to help 1 https://github.com/hwchase17/langchain
the LLMs better understand the reasoning process. 2 https://huggingface.co/docs/diffusers/index
Specifically, the image generated from the Visual 3 https://github.com/facebookresearch/MaskFormer

Foundation Models are saved under the“image/” 4 https://github.com/lllyasviel/ControlNet


Figure 4. Multiple rounds of dialogue between humans and Visual ChatGPT. In the dialogues, Visual ChatGPT can understand human intents, support the language and image inputs, and accomplish complex visual tasks such as generation, question answering, and editing.
Figure 5. Case study of prompt managing of system principles. We qualitatively analyze the four proposals: filename sensitivity, reasoning format strictness, reliability, and chain of thought. The top-left shows how emphasizing filename sensitivity in M(P) affects file reference accuracy. The top-right shows that without reasoning format strictness, further parsing cannot be performed. The bottom-left shows the difference made by telling Visual ChatGPT to be loyal to tool observations rather than faking image content. The bottom-right shows that emphasizing the ability to use tools in a chain helps the decision.
Figure 6. Case study of prompt managing of foundation models. We qualitatively analyze the four proposals: Name, Usage, Inputs/Outputs, and Example. The top-left shows that without the tool name, Visual ChatGPT guesses the name and then fails to use the correct tool. The top-right shows that when the usage description is missing or unclear, it calls other tools or encounters an error. The bottom-left shows that the lack of inputs/outputs format requirements leads to wrong parameters. The bottom-right shows that the example is sometimes optional, because ChatGPT is able to summarize the historical information and human intents to use the correct tool.
Figure 7. Case study of prompt managing of the user query and model output. We qualitatively analyze the four proposals: unique filenames, force VFM thinking, chained filenames, and asking for more details. The top-left shows that unique filenames avoid overwriting. The top-right shows that force VFM thinking encourages tool invocation and a strict thinking format. The bottom-left shows that chained file naming helps understand files and can be successfully observed and summarized by Visual ChatGPT. The bottom-right shows that Visual ChatGPT is able to detect ambiguous references and ask for more details.
4.3. Case Study of Prompt Manager

The case study of prompt managing of system principles is analyzed in Fig. 5. To validate the effectiveness of our system principle prompts, we remove different parts from them to compare model performance. Each removal results in a different capacity degradation.

The case study of prompt managing of foundation models is analyzed in Fig. 6. The name of the VFM is the most important element and needs to be clearly defined. When the name is missing or ambiguous, Visual ChatGPT will guess it many times until it finds an existing VFM, or encounters an error, as the top-left figure shows. The VFM usage should clearly describe the specific scenario where a model should be used to avoid wrong responses; the top-right figure shows that the style transfer is mishandled as a replacement. The input and output formats should be prompted accurately to avoid parameter errors, as shown in the bottom-left. The example prompt can help the model deal with complex usages but is optional: as shown in the bottom-right figure, even when we delete the example prompt, ChatGPT can still summarize the dialogue history and human intents to use the correct VFM. The complete visual foundation model prompts are shown in Appendix A.

The case study of prompt managing of the user query is analyzed in the upper part of Fig. 7. The top-left figure shows that without unique naming of image files, a newly uploaded image file might be renamed to avoid overwriting, resulting in a wrong reference. As shown in the top-right figure, by moving the thought guidance from M(P) to M(Q) and making it spoken in Visual ChatGPT's voice as forced thinking, invoking more VFMs is emphasized rather than imagination based on the textual context, as compared in Q2. By forcing Visual ChatGPT to say "Thought: Do I need to use a tool?", M(Q) makes it easier to pass the regex match correctly. In contrast, without forced thinking, A3 may wrongly generate the end-of-thoughts token and directly consider all of its ChatGPT outputs as the final response.

The case study of prompt managing of model outputs is analyzed in the bottom part of Fig. 7. The bottom-left picture compares the performance of removing and keeping the chained naming rule. With the chained naming rule, Visual ChatGPT can recognize the file type, trigger the correct VFM, and conclude the file dependency relationship from the naming rule. It shows that the chained naming rule does help Visual ChatGPT to understand. The bottom-right picture gives an example of asking for more details when the item reference is ambiguous, which also indicates the safety of our system.

5. Limitations

Although Visual ChatGPT is a promising approach for multi-modal dialogue, it has some limitations, including:

• Dependence on ChatGPT and VFMs Visual ChatGPT relies heavily on ChatGPT to assign tasks and on VFMs to execute them. The performance of Visual ChatGPT is thus heavily influenced by the accuracy and effectiveness of these models.

• Heavy Prompt Engineering Visual ChatGPT requires a significant amount of prompt engineering to convert VFMs into language and make these model descriptions distinguishable. This process can be time-consuming and requires expertise in both computer vision and natural language processing.

• Limited Real-time Capabilities Visual ChatGPT is designed to be general. It tries to decompose a complex task into several subtasks automatically. Thus, when handling a specific task, Visual ChatGPT may invoke multiple VFMs, resulting in limited real-time capabilities compared to expert models specifically trained for a particular task.

• Token Length Limitation The maximum token length in ChatGPT may limit the number of foundation models that can be used. If there are thousands or millions of foundation models, a pre-filter module may be necessary to limit the VFMs fed to ChatGPT.

• Security and Privacy The ability to easily plug and unplug foundation models may raise security and privacy concerns, particularly for remote models accessed via APIs. Careful consideration and automatic checks are needed to ensure that sensitive data is not exposed or compromised.

6. Conclusion

In this work, we propose Visual ChatGPT, an open system incorporating different VFMs and enabling users to interact with ChatGPT beyond the language format. To build such a system, we meticulously design a series of prompts to help inject the visual information into ChatGPT, which can thus solve complex visual questions step by step. Massive experiments and selected cases have demonstrated the great potential and competence of Visual ChatGPT for different tasks. Apart from the aforementioned limitations, another concern is that some generation results are unsatisfactory due to failures of the VFMs and the instability of the prompts. Thus, a self-correction module is necessary for checking the consistency between execution results and human intentions and for making the corresponding edits accordingly. Such self-correction behavior can lead to more complex reasoning by the model, significantly increasing the inference time. We will address this issue in future work.
References

[1] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. In Advances in Neural Information Processing Systems, 2022.

[2] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425–2433, 2015.

[3] Hangbo Bao, Wenhui Wang, Li Dong, Qiang Liu, Owais Khan Mohammed, Kriti Aggarwal, Subhojit Som, and Furu Wei. Vlmo: Unified vision-language pre-training with mixture-of-modality-experts. arXiv preprint arXiv:2111.02358, 2021.

[4] Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. arXiv preprint arXiv:2211.09800, 2022.

[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.

[6] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7291–7299, 2017.

[7] Harrison Chase. LangChain, 10 2022.

[8] Jun Chen, Han Guo, Kai Yi, Boyang Li, and Mohamed Elhoseiny. Visualgpt: Data-efficient adaptation of pretrained language models for image captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18030–18040, 2022.

[9] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Universal image-text representation learning. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXX, pages 104–120. Springer, 2020.

[10] Bowen Cheng, Alex Schwing, and Alexander Kirillov. Per-pixel classification is not all you need for semantic segmentation. Advances in Neural Information Processing Systems, 34:17864–17875, 2021.

[11] Ernest Davis and Gary Marcus. Commonsense reasoning and commonsense knowledge in artificial intelligence. Communications of the ACM, 58(9):92–103, 2015.

[12] Constantin Eichenberg, Sidney Black, Samuel Weinbach, Letitia Parcalabescu, and Anette Frank. Magma–multimodal augmentation of generative models through adapter-based finetuning. arXiv preprint arXiv:2112.05253, 2021.

[13] Tsu-Jui Fu, Linjie Li, Zhe Gan, Kevin Lin, William Yang Wang, Lijuan Wang, and Zicheng Liu. Violet: End-to-end video-language transformers with masked visual-token modeling. arXiv preprint arXiv:2111.12681, 2021.

[14] Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, and Jingjing Liu. Large-scale adversarial training for vision-and-language representation learning. Advances in Neural Information Processing Systems, 33:6616–6628, 2020.

[15] Zhe Gan, Chuang Gan, Xiaodong He, Yunchen Pu, Kenneth Tran, Jianfeng Gao, Lawrence Carin, and Li Deng. Semantic compositional networks for visual captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5630–5639, 2017.

[16] Geonmo Gu, Byungsoo Ko, SeoungHyun Go, Sung-Hyun Lee, Jingeun Lee, and Minchul Shin. Towards light-weight and real-time line segment detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 726–734, 2022.

[17] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790–2799. PMLR, 2019.

[18] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1125–1134, 2017.

[19] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171–4186, 2019.

[20] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems, 2022.

[21] Bowen Li, Xiaojuan Qi, Thomas Lukasiewicz, and Philip HS Torr. Manigan: Text-guided image manipulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7880–7889, 2020.

[22] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023.

[23] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In International Conference on Machine Learning, pages 12888–12900. PMLR, 2022.

[24] Kunchang Li, Yali Wang, Junhao Zhang, Peng Gao, Guanglu Song, Yu Liu, Hongsheng Li, and Yu Qiao. Uniformer: Unifying convolution and self-attention for visual recognition. arXiv preprint arXiv:2201.09450, 2022.

[25] Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. Oscar: Object-semantics aligned pre-training for vision-language tasks. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXX 16, pages 121–137. Springer, 2020.

[26] Yangguang Li, Feng Liang, Lichen Zhao, Yufeng Cui, Wanli Ouyang, Jing Shao, Fengwei Yu, and Junjie Yan. Supervision exists everywhere: A data efficient contrastive language-image pre-training paradigm. In International Conference on Learning Representations.

[27] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740–755. Springer, 2014.

[28] Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. In Advances in Neural Information Processing Systems, 2022.

[29] Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155, 2022.

[30] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR, 2021.

[31] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners.

[32] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551, 2020.

[33] René Ranftl, Alexey Bochkovskiy, and Vladlen Koltun. Vision transformers for dense prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12179–12188, 2021.

[34] René Ranftl, Katrin Lasinger, David Hafner, Konrad Schindler, and Vladlen Koltun. Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. IEEE transactions on pattern analysis and machine intelligence, 44(3):1623–1637, 2020.

[35] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.

[36] Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022.

[37] Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008–3021, 2020.

[38] Maria Tsimpoukelli, Jacob L Menick, Serkan Cabi, SM Eslami, Oriol Vinyals, and Felix Hill. Multimodal few-shot learning with frozen language models. Advances in Neural Information Processing Systems, 34:200–212, 2021.

[39] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.

[40] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3156–3164, 2015.

[41] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.

[42] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed H Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems, 2022.

[43] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online, Oct. 2020. Association for Computational Linguistics.

[44] Saining Xie and Zhuowen Tu. Holistically-nested edge detection. In Proceedings of the IEEE international conference on computer vision, pages 1395–1403, 2015.

[45] Zhao Xu, Xu Baojie, and Wu Guoxin. Canny edge detection based on open cv. In 2017 13th IEEE international conference on electronic measurement & instruments (ICEMI), pages 53–56. IEEE, 2017.

[46] Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Yumao Lu, Zicheng Liu, and Lijuan Wang. An empirical study of gpt-3 for few-shot knowledge-based vqa. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 3081–3089, 2022.

[47] Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. Star: Bootstrapping reasoning with reasoning. In Advances in Neural Information Processing Systems.

[48] Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6720–6731, 2019.

[49] Rowan Zellers, Jiasen Lu, Ximing Lu, Youngjae Yu, Yanpeng Zhao, Mohammadreza Salehi, Aditya Kusupati, Jack Hessel, Ali Farhadi, and Yejin Choi. Merlot reserve: Neural script knowledge through vision and language and sound. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16375–16387, 2022.

[50] Andy Zeng, Adrian Wong, Stefan Welker, Krzysztof Choromanski, Federico Tombari, Aveek Purohit, Michael Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, et al. Socratic models: Composing zero-shot multimodal reasoning with language. arXiv preprint arXiv:2204.00598, 2022.

[51] Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer. Scaling vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12104–12113, 2022.

[52] Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer. Lit: Zero-shot transfer with locked-image text tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18123–18133, 2022.

[53] Lvmin Zhang and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. arXiv preprint arXiv:2302.05543, 2023.

[54] Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. Vinvl: Making visual representations matter in vision-language models. arXiv preprint arXiv:2101.00529, 1(6):8, 2021.

[55] Tianhao Zhang, Hung-Yu Tseng, Lu Jiang, Weilong Yang, Honglak Lee, and Irfan Essa. Text as neural operator: Image manipulation by text instruction. In Proceedings of the 29th ACM International Conference on Multimedia, pages 1893–1902, 2021.

[56] Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. Automatic chain of thought prompting in large language models. arXiv preprint arXiv:2210.03493, 2022.

[57] Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. Multimodal chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923, 2023.

[58] Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. Least-to-most prompting enables complex reasoning in large language models. arXiv preprint arXiv:2205.10625, 2022.
A. Tool Details

• Remove Something From The Photo:
Model: "runwayml/stable-diffusion-inpainting" from the HuggingFace library, StableDiffusionInpaintPipeline model; "CIDAS/clipseg-rd64-refined" from the HuggingFace library, CLIPSegForImageSegmentation model.
InOut: image path, textual what to remove → image path
Prompt: Remove something from the photo: useful for when you want to remove an object or something from the photo from its description or location. The input to this tool should be a comma separated string of two, representing the image path and the object to be removed.

• Replace Something From The Photo:
Model: "runwayml/stable-diffusion-inpainting" from the HuggingFace library, StableDiffusionInpaintPipeline model; "CIDAS/clipseg-rd64-refined" from the HuggingFace library, CLIPSegForImageSegmentation model.
InOut: image path, textual what to replace, textual what to add → image path
Prompt: Replace something from the photo: useful for when you want to replace an object from the object description or location with another object from its description. The input to this tool should be a comma separated string of three, representing the image path, the object to be replaced, and the object to replace it with.
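Both photo-editing tools above share the same two-stage recipe: CLIPSeg localizes the region named by the user, and Stable Diffusion inpainting repaints it. The sketch below is a minimal illustration of that pattern under the checkpoints listed above; the 512×512 resizing, the 0.5 mask threshold, and the function name are illustrative assumptions rather than the paper's exact implementation.

```python
import torch
import numpy as np
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation
from diffusers import StableDiffusionInpaintPipeline

# Stage 1: CLIPSeg turns a textual target ("the dog") into a rough binary mask.
seg_processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
seg_model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

# Stage 2: Stable Diffusion inpainting repaints the masked region from a prompt.
inpaint = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting")

def replace_object(image_path: str, to_replace: str, replace_with: str) -> Image.Image:
    image = Image.open(image_path).convert("RGB").resize((512, 512))
    inputs = seg_processor(text=[to_replace], images=[image], return_tensors="pt")
    with torch.no_grad():
        logits = seg_model(**inputs).logits          # low-resolution relevance map
    mask = torch.sigmoid(logits).squeeze().numpy()
    mask_img = Image.fromarray(((mask > 0.5) * 255).astype(np.uint8)).resize(image.size)
    return inpaint(prompt=replace_with, image=image, mask_image=mask_img).images[0]
```

Removal can then be expressed as replacement with a neutral prompt such as "background", which is consistent with both tools listing the same pair of models.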
• Instruct Image Using Text:
Model: "timbrooks/instruct-pix2pix" from HuggingFace, StableDiffusionInstructPix2PixPipeline model.
InOut: image path, textual how to modify → image path
Prompt: Instruct image using text: useful for when you want the style of the image to be like the text. like: make it look like a painting, or make it like a robot. The input to this tool should be a comma separated string of two, representing the image path and the text.
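For reference, a minimal sketch of this tool with the diffusers InstructPix2Pix pipeline is shown below; the step count and image_guidance_scale are illustrative defaults, not values reported in the paper.

```python
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained("timbrooks/instruct-pix2pix")

def instruct_image(image_path: str, instruction: str) -> Image.Image:
    """Apply a textual edit such as 'make it look like a painting' to the image."""
    image = Image.open(image_path).convert("RGB")
    # image_guidance_scale trades faithfulness to the input image against edit strength.
    return pipe(instruction, image=image, num_inference_steps=20,
                image_guidance_scale=1.5).images[0]
```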
• Answer Question About The Image:
Model: "Salesforce/blip-vqa-base" from HuggingFace, BlipForQuestionAnswering model.
InOut: image path, question → answer
Prompt: useful when you need an answer for a question based on an image. like: what is the background color of the last image, how many cats in this figure, what is in this figure.

• Get Photo Description:
Model: "Salesforce/blip-image-captioning-base" from the HuggingFace library, BlipForConditionalGeneration model.
InOut: image path → natural language description
Prompt: Get photo description: useful for when you want to know what is inside the photo. The input to this tool should be a string, representing the image path.
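Both BLIP tools wrap standard transformers classes; the sketch below shows the captioning and VQA calls with the checkpoints named above, using default greedy decoding (an assumption).

```python
from PIL import Image
from transformers import (BlipProcessor, BlipForConditionalGeneration,
                          BlipForQuestionAnswering)

caption_processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
caption_model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
vqa_processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
vqa_model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

def describe(image_path: str) -> str:
    """Get Photo Description: image path -> natural language caption."""
    image = Image.open(image_path).convert("RGB")
    inputs = caption_processor(image, return_tensors="pt")
    return caption_processor.decode(caption_model.generate(**inputs)[0],
                                    skip_special_tokens=True)

def answer(image_path: str, question: str) -> str:
    """Answer Question About The Image: image path + question -> answer."""
    image = Image.open(image_path).convert("RGB")
    inputs = vqa_processor(image, question, return_tensors="pt")
    return vqa_processor.decode(vqa_model.generate(**inputs)[0],
                                skip_special_tokens=True)
```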
• Generate Image From User Input Text:
Model: "runwayml/stable-diffusion-v1-5" from the HuggingFace library, StableDiffusionPipeline model.
InOut: textual description → image path
Prompt: Generate image from user input text: useful for when you want to generate an image from a user input text and have it saved to a file. The input to this tool should be a string, representing the text used to generate the image.
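A minimal text-to-image sketch with the checkpoint above; the uuid-based file naming is a hypothetical convention used only to show that the tool returns a path rather than the image itself.

```python
import uuid
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

def text_to_image(text: str) -> str:
    """Generate an image from the user's text and return the path it was saved to."""
    image = pipe(text).images[0]
    path = f"{uuid.uuid4().hex[:8]}.png"   # hypothetical naming scheme
    image.save(path)
    return path
```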
• Edge Detection On Image:
Model: Canny Edge Detector from OpenCV
InOut: image path → edge image path
Prompt: Edge Detection On Image: useful for when you want to detect the edges of the image. like: detect the edges of this image, or canny detection on image, or perform edge detection on this image, or detect the canny image of this image. The input to this tool should be a string, representing the image path.

• Image Generation Condition On Canny Image:
Model: ControlNet for Canny Edge.
InOut: edge image path, textual description → image path
Prompt: useful for when you want to generate a new real image from both the user description and a canny image. like: generate a real image of an object or something from this canny image, or generate a new real image of an object or something from this edge image. The input to this tool should be a comma separated string of two, representing the image path and the user description.
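This detector/generator pair is representative of all the ControlNet-style tools that follow: a lightweight detector produces a condition image, and a ControlNet-augmented Stable Diffusion pipeline maps condition plus text to a new image. The sketch below uses OpenCV's Canny and the diffusers ControlNet integration; the lllyasviel/sd-controlnet-canny checkpoint, the fixed 100/200 thresholds, and the step count are assumptions for illustration.

```python
import cv2
import numpy as np
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

def canny_edges(image_path: str) -> Image.Image:
    """Edge Detection On Image: image path -> 3-channel edge (condition) image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 100, 200)
    return Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet
)

def image_from_canny(edge_image: Image.Image, description: str) -> Image.Image:
    """Image Generation Condition On Canny Image: edge image + text -> new image."""
    return pipe(description, image=edge_image, num_inference_steps=20).images[0]
```

The line, HED-boundary, segmentation, depth, normal-map, scribble, and pose tools below follow the same pattern, swapping in the matching detector and ControlNet checkpoint.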
• Line Detection On Image:
Model: M-LSD Detector for Straight Lines
InOut: image path → line image path
Prompt: Line Detection On Image: useful for when you want to detect the straight lines of the image. like: detect the straight lines of this image, or straight line detection on image, or perform straight line detection on this image, or detect the straight line image of this image. The input to this tool should be a string, representing the image path.

• Generate Image Condition On Line Image:
Model: ControlNet for M-LSD Lines.
InOut: line image path, textual description → image path
Prompt: useful for when you want to generate a new real image from both the user description and a straight line image. like: generate a real image of an object or something from this straight line image, or generate a new real image of an object or something from these straight lines. The input to this tool should be a comma separated string of two, representing the image path and the user description.

• Hed Detection On Image:
Model: HED Boundary Detector
InOut: image path → hed image path
Prompt: Hed Detection On Image: useful for when you want to detect the soft hed boundary of the image. like: detect the soft hed boundary of this image, or hed boundary detection on image, or perform hed boundary detection on this image, or detect the soft hed boundary image of this image. The input to this tool should be a string, representing the image path.

• Generate Image Condition On Soft Hed Boundary Image:
Model: ControlNet for HED.
InOut: hed image path, textual description → image path
Prompt: Generate Image Condition On Soft Hed Boundary Image: useful for when you want to generate a new real image from both the user description and a soft hed boundary image. like: generate a real image of an object or something from this soft hed boundary image, or generate a new real image of an object or something from this hed boundary. The input to this tool should be a comma separated string of two, representing the image path and the user description.

• Segmentation On Image:
Model: Uniformer Segmentation
InOut: image path → segment image path
Prompt: useful for when you want to detect segmentations of the image. like: segment this image, or generate segmentations on this image, or perform segmentation on this image. The input to this tool should be a string, representing the image path.

• Generate Image Condition On Segmentations:
Model: ControlNet for Segmentation.
InOut: segment image path, textual description → image path
Prompt: useful for when you want to generate a new real image from both the user description and segmentations. like: generate a real image of an object or something from this segmentation image, or generate a new real image of an object or something from these segmentations. The input to this tool should be a comma separated string of two, representing the image path and the user description.

• Predict Depth On Image:
Model: MiDaS Depth Estimation
InOut: image path → depth image path
Prompt: Predict Depth Map On Image: useful for when you want to detect the depth of the image. like: generate the depth from this image, or detect the depth map on this image, or predict the depth for this image. The input to this tool should be a string, representing the image path.

• Generate Image Condition On Depth:
Model: ControlNet for Depth.
InOut: depth image path, textual description → image path
Prompt: Generate Image Condition On Depth Map: useful for when you want to generate a new real image from both the user description and a depth image. like: generate a real image of an object or something from this depth image, or generate a new real image of an object or something from the depth map. The input to this tool should be a comma separated string of two, representing the image path and the user description.
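A minimal sketch of the depth tool using the MiDaS weights published on torch.hub; the small model variant and the grayscale normalization are illustrative choices, not necessarily those of the released code.

```python
import cv2
import torch
import numpy as np
from PIL import Image

# MiDaS weights and the matching pre-processing transform from the intel-isl hub repo.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
small_transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform
midas.eval()

def predict_depth(image_path: str) -> Image.Image:
    """Predict Depth On Image: image path -> grayscale depth map image."""
    img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = midas(small_transform(img))                 # (1, H', W') inverse depth
        pred = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=img.shape[:2], mode="bicubic"
        ).squeeze()
    depth = pred.numpy()
    depth = (255 * (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)).astype(np.uint8)
    return Image.fromarray(depth)
```

Generate Image Condition On Depth then feeds this map, together with the user text, into a depth ControlNet in the same way as the Canny example earlier, and the normal-map tool below derives its condition image from the same MiDaS depth estimate.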
• Predict Normal Map On Image:
Model: MiDaS Depth Estimation for Normal Map
InOut: image path → norm image path
Prompt: Predict Normal Map On Image: useful for when you want to detect the normal map of the image. like: generate the normal map from this image, or predict the normal map of this image. The input to this tool should be a string, representing the image path.

• Generate Image Condition On Normal Map:
Model: ControlNet for Normal Map.
InOut: norm image path, textual description → image path
Prompt: Generate Image Condition On Normal Map: useful for when you want to generate a new real image from both the user description and a normal map. like: generate a real image of an object or something from this normal map, or generate a new real image of an object or something from the normal map. The input to this tool should be a comma separated string of two, representing the image path and the user description.

• Sketch Detection On Image:
Model: HED Boundary Detector
InOut: image path → sketch image path
Prompt: Sketch Detection On Image: useful for when you want to generate a scribble of the image. like: generate a scribble of this image, or generate a sketch from this image, or detect the sketch from this image. The input to this tool should be a string, representing the image path.

• Generate Image Condition On Sketch Image:
Model: ControlNet for Scribble.
InOut: sketch image path, textual description → image path
Prompt: useful for when you want to generate a new real image from both the user description and a scribble image. like: generate a real image of an object or something from this scribble image, or generate a new real image of an object or something from this sketch. The input to this tool should be a comma separated string of two, representing the image path and the user description.

• Pose Detection On Image:
Model: Openpose Detector
InOut: image path → pos image path
Prompt: Pose Detection On Image: useful for when you want to detect the human pose of the image. like: generate human poses of this image, or generate a pose image from this image. The input to this tool should be a string, representing the image path.

• Generate Image Condition On Pose Image:
Model: ControlNet for Human Pose.
InOut: pos image path, textual description → image path
Prompt: Generate Image Condition On Pose Image: useful for when you want to generate a new real image from both the user description and a human pose image. like: generate a real image of a human from this human pose image, or generate a new real image of a human from this pose. The input to this tool should be a comma separated string of two, representing the image path and the user description.
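Each entry above pairs an executable model call with the natural-language description that the Prompt Manager exposes to ChatGPT, which decides from those descriptions alone when to invoke each tool. The sketch below shows one plausible way to register such tools with a LangChain-style conversational agent; the Tool/initialize_agent wiring and the agent type are assumptions for illustration and may differ from the released code, and describe / text_to_image refer to the captioning and text-to-image sketches earlier in this appendix.

```python
from langchain.agents import Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

# Each Visual Foundation Model becomes a named tool whose description is the
# "Prompt" text listed above; the LLM chooses tools based only on these strings.
tools = [
    Tool(name="Get Photo Description",
         func=describe,            # BLIP captioning sketch from earlier in this appendix
         description="useful for when you want to know what is inside the photo. "
                     "The input to this tool should be a string, representing the image path."),
    Tool(name="Generate Image From User Input Text",
         func=text_to_image,       # Stable Diffusion text-to-image sketch from earlier
         description="useful for when you want to generate an image from a user input text "
                     "and have it saved to a file. The input should be the text."),
]

memory = ConversationBufferMemory(memory_key="chat_history", output_key="output")
agent = initialize_agent(tools, OpenAI(temperature=0),
                         agent="conversational-react-description",
                         memory=memory, verbose=True)
agent.run("Generate an image of a red flower and then describe it.")
```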