Congratulations! You know how to run background tasks from your gradio app on a schedule ⏲️. Check out the application running on Spaces [here](https://huggingface.co/spaces/freddyaboulton/gradio-google-forms). The complete code is [here](https://huggingface.co/spaces/freddyaboulton/gradio-google-forms/blob/main/app.py).
Conclusion
https://gradio.app/guides/running-background-tasks
Other Tutorials - Running Background Tasks Guide
Let's start by using `llama-index` on top of `openai` to build a RAG chatbot on any text or PDF files that you can demo and share in less than 30 lines of code. You'll need to have an OpenAI key for this example (keep reading for the free, open-source equivalent!) $code_llm_llamaindex
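Since the full example is injected by the `$code_llm_llamaindex` placeholder, here is a rough, hedged sketch of the pattern (the `llama_index.core` import paths and the local `data/` directory are assumptions based on recent `llama-index` releases):

```python
import gradio as gr
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Assumes OPENAI_API_KEY is set and your text/PDF files live in ./data
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
chat_engine = index.as_chat_engine()

def answer(message, history):
    # Query the indexed documents with the user's message
    return str(chat_engine.chat(message))

gr.ChatInterface(fn=answer, type="messages").launch()
```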
Llama Index
https://gradio.app/guides/chatinterface-examples
Chatbots - Chatinterface Examples Guide
Here's an example using `langchain` on top of `openai` to build a general-purpose chatbot. As before, you'll need to have an OpenAI key for this example. $code_llm_langchain Tip: For quick prototyping, the community-maintained <a href='https://github.com/AK391/langchain-gradio'>langchain-gradio repo</a> makes it even easier to build chatbots on top of LangChain.
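As a hedged sketch of this pattern (not the exact `$code_llm_langchain` example; the model name is an assumption):

```python
import gradio as gr
from langchain_core.messages import AIMessage, HumanMessage
from langchain_openai import ChatOpenAI

# Assumes OPENAI_API_KEY is set in the environment
model = ChatOpenAI(model="gpt-4o-mini")

def predict(message, history):
    # Convert the openai-style history into LangChain message objects
    langchain_history = []
    for msg in history:
        if msg["role"] == "user":
            langchain_history.append(HumanMessage(content=msg["content"]))
        elif msg["role"] == "assistant":
            langchain_history.append(AIMessage(content=msg["content"]))
    langchain_history.append(HumanMessage(content=message))
    return model.invoke(langchain_history).content

gr.ChatInterface(fn=predict, type="messages").launch()
```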
LangChain
https://gradio.app/guides/chatinterface-examples
Chatbots - Chatinterface Examples Guide
Of course, we could also use the `openai` library directly. Here's a similar example to the LangChain one, but this time with streaming as well: Tip: For quick prototyping, the <a href='https://github.com/gradio-app/openai-gradio'>openai-gradio library</a> makes it even easier to build chatbots on top of OpenAI models.
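Here is a hedged sketch of that streaming pattern with the `openai` SDK (the model name and sampling defaults are assumptions):

```python
import gradio as gr
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment
client = OpenAI()

def predict(message, history):
    history.append({"role": "user", "content": message})
    stream = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=history,
        stream=True,
    )
    # Accumulate the partial response and yield it as chunks arrive
    partial = ""
    for chunk in stream:
        if chunk.choices:
            partial += chunk.choices[0].delta.content or ""
            yield partial

gr.ChatInterface(fn=predict, type="messages").launch()
```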
OpenAI
https://gradio.app/guides/chatinterface-examples
Chatbots - Chatinterface Examples Guide
Of course, in many cases you want to run a chatbot locally. Here's the equivalent example built on the SmolLM2-135M-Instruct model, using the Hugging Face `transformers` library. $code_llm_hf_transformers
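As a hedged sketch of what such a local demo can look like (the generation settings are arbitrary, and a plain-text chat history is assumed):

```python
import gradio as gr
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-135M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

def predict(message, history):
    history.append({"role": "user", "content": message})
    # Build the prompt with the model's chat template
    input_ids = tokenizer.apply_chat_template(
        history, add_generation_prompt=True, return_tensors="pt"
    )
    with torch.no_grad():
        outputs = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)
    # Decode only the newly generated tokens
    return tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)

gr.ChatInterface(fn=predict, type="messages").launch()
```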
Hugging Face `transformers`
https://gradio.app/guides/chatinterface-examples
Chatbots - Chatinterface Examples Guide
The SambaNova Cloud API provides access to full-precision open-source models, such as the Llama family. Here's an example of how to build a Gradio app around the SambaNova API: $code_llm_sambanova Tip: For quick prototyping, the <a href='https://github.com/gradio-app/sambanova-gradio'>sambanova-gradio library</a> makes it even easier to build chatbots on top of SambaNova models.
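SambaNova's endpoint is OpenAI-compatible, so one hedged way to sketch this is with the `openai` SDK (the base URL and model name below are assumptions, check SambaNova's documentation):

```python
import os

import gradio as gr
from openai import OpenAI

# base_url and model are assumptions; verify against SambaNova's docs
client = OpenAI(
    api_key=os.environ["SAMBANOVA_API_KEY"],
    base_url="https://api.sambanova.ai/v1",
)

def predict(message, history):
    history.append({"role": "user", "content": message})
    stream = client.chat.completions.create(
        model="Meta-Llama-3.1-8B-Instruct",
        messages=history,
        stream=True,
    )
    partial = ""
    for chunk in stream:
        if chunk.choices:
            partial += chunk.choices[0].delta.content or ""
            yield partial

gr.ChatInterface(fn=predict, type="messages").launch()
```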
SambaNova
https://gradio.app/guides/chatinterface-examples
Chatbots - Chatinterface Examples Guide
The Hyperbolic AI API provides access to many open-source models, such as the Llama family. Here's an example of how to build a Gradio app around the Hyperbolic API: $code_llm_hyperbolic Tip: For quick prototyping, the <a href='https://github.com/HyperbolicLabs/hyperbolic-gradio'>hyperbolic-gradio library</a> makes it even easier to build chatbots on top of Hyperbolic models.
Hyperbolic
https://gradio.app/guides/chatinterface-examples
Chatbots - Chatinterface Examples Guide
Anthropic's Claude model can also be used via API. Here's a simple 20-questions-style game built on top of the Anthropic API: $code_llm_claude
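A hedged sketch of the basic Claude wiring (the guide's actual demo implements the 20-questions game; the model name and system prompt here are assumptions):

```python
import anthropic
import gradio as gr

# Assumes ANTHROPIC_API_KEY is set in the environment
client = anthropic.Anthropic()

def predict(message, history):
    history.append({"role": "user", "content": message})
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        # Stand-in system prompt for a 20-questions-style game
        system="You are hosting a game of 20 questions. Think of an object and answer only yes or no.",
        messages=history,
    )
    return response.content[0].text

gr.ChatInterface(fn=predict, type="messages").launch()
```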
Anthropic's Claude
https://gradio.app/guides/chatinterface-examples
Chatbots - Chatinterface Examples Guide
The Discord bot will listen to messages mentioning it in channels. When it receives a message (which can include text as well as files), it will send it to your Gradio app via Gradio's built-in API. Your bot will reply with the response it receives from the API. Because Gradio's API is very flexible, you can create Discord bots that support text, images, audio, streaming, chat history, and a wide variety of other features very easily. ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/Screen%20Recording%202024-12-18%20at%204.26.55%E2%80%AFPM.gif)
How does it work?
https://gradio.app/guides/creating-a-discord-bot-from-a-gradio-app
Chatbots - Creating A Discord Bot From A Gradio App Guide
* Install the latest version of `gradio` and the `discord.py` libraries:

```
pip install --upgrade gradio discord.py~=2.0
```

* Have a running Gradio app. This app can be running locally or on Hugging Face Spaces. In this example, we will be using the [Gradio Playground Space](https://huggingface.co/spaces/abidlabs/gradio-playground-bot), which takes in an image and/or text and generates the code to generate the corresponding Gradio app.

Now, we are ready to get started!

1. Create a Discord application

First, go to the [Discord apps dashboard](https://discord.com/developers/applications). Look for the "New Application" button and click it. Give your application a name, and then click "Create".

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/discord-4.png)

On the resulting screen, you will see basic information about your application. Under the Settings section, click on the "Bot" option. You can update your bot's username if you would like. Then click on the "Reset Token" button. A new token will be generated. Copy it, as we will need it for the next step.

Scroll down to the section that says "Privileged Gateway Intents". Your bot will need certain permissions to work correctly. In this tutorial, we will only be using the "Message Content Intent", so click the toggle to enable this intent. Save the changes.

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/discord-3.png)

2. Write a Discord bot

Let's start by writing a very simple Discord bot, just to make sure that everything is working. Write the following Python code in a file called `bot.py`, pasting the Discord bot token from the previous step:

```python
# bot.py
import discord

TOKEN = "PASTE YOUR DISCORD BOT TOKEN HERE"

# discord.py 2.0 requires intents to be passed to the client
intents = discord.Intents.default()
client = discord.Client(intents=intents)

@client.event
async def on_ready():
    print(f'{client.user} has connected to Discord!')

client.run(TOKEN)
```
Prerequisites
https://gradio.app/guides/creating-a-discord-bot-from-a-gradio-app
Chatbots - Creating A Discord Bot From A Gradio App Guide
Now, run this file: `python bot.py`, which should run and print a message like:

```text
We have logged in as GradioPlaygroundBot1451
```

If that is working, we are ready to add Gradio-specific code. We will be using the [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) to query the Gradio Playground Space mentioned above. Here's the updated `bot.py` file:

```python
import discord
from gradio_client import Client, handle_file
import httpx
import os

TOKEN = "PASTE YOUR DISCORD BOT TOKEN HERE"

intents = discord.Intents.default()
intents.message_content = True
client = discord.Client(intents=intents)

gradio_client = Client("abidlabs/gradio-playground-bot")

def download_image(attachment):
    response = httpx.get(attachment.url)
    image_path = f"./images/{attachment.filename}"
    os.makedirs("./images", exist_ok=True)
    with open(image_path, "wb") as f:
        f.write(response.content)
    return image_path

@client.event
async def on_ready():
    print(f'We have logged in as {client.user}')

@client.event
async def on_message(message):
    # Ignore messages from the bot itself
    if message.author == client.user:
        return

    # Check if the bot is mentioned in the message and reply
    if client.user in message.mentions:
        # Extract the message content without the bot mention
        clean_message = message.content.replace(f"<@{client.user.id}>", "").strip()

        # Handle images (only the first image is used)
        files = []
        if message.attachments:
            for attachment in message.attachments:
                if any(attachment.filename.lower().endswith(ext) for ext in ['png', 'jpg', 'jpeg', 'gif', 'webp']):
                    image_path = download_image(attachment)
                    files.append(handle_file(image_path))
                    break
```
Prerequisites
https://gradio.app/guides/creating-a-discord-bot-from-a-gradio-app
Chatbots - Creating A Discord Bot From A Gradio App Guide
```python
        # Stream the responses to the channel
        for response in gradio_client.submit(
            message={"text": clean_message, "files": files},
        ):
            await message.channel.send(response[-1])

client.run(TOKEN)
```

3. Add the bot to your Discord Server

Now we are ready to install the bot on our server. Go back to the [Discord apps dashboard](https://discord.com/developers/applications). Under the Settings section, click on the "OAuth2" option. Scroll down to the "OAuth2 URL Generator" box and select the "bot" checkbox:

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/discord-2.png)

Then, in the "Bot Permissions" box that pops up underneath, enable the following permissions:

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/discord-1.png)

Copy the generated URL that appears underneath, which should look something like:

```text
https://discord.com/oauth2/authorize?client_id=1319011745452265575&permissions=377957238784&integration_type=0&scope=bot
```

Paste it into your browser, which should allow you to add the Discord bot to any Discord server that you manage.

4. That's it!

Now you can mention your bot from any channel in your Discord server, optionally attach an image, and it will respond with generated Gradio app code! The bot will:

1. Listen for mentions
2. Process any attached images
3. Send the text and images to your Gradio app
4. Stream the responses back to the Discord channel

This is just a basic example - you can extend it to handle more types of files, add error handling, or integrate with different Gradio apps.
Prerequisites
https://gradio.app/guides/creating-a-discord-bot-from-a-gradio-app
Chatbots - Creating A Discord Bot From A Gradio App Guide
![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/Screen%20Recording%202024-12-18%20at%204.26.55%E2%80%AFPM.gif)

If you build a Discord bot from a Gradio app, feel free to share it on X and tag [the Gradio account](https://x.com/Gradio), and we are happy to help you amplify!
Prerequisites
https://gradio.app/guides/creating-a-discord-bot-from-a-gradio-app
Chatbots - Creating A Discord Bot From A Gradio App Guide
First, we'll build the UI without handling these events, and add them from there. We'll use the Hugging Face InferenceClient so that we can get started without setting up any API keys. This is what the first draft of our application looks like:

```python
from huggingface_hub import InferenceClient
import gradio as gr

client = InferenceClient()

def respond(
    prompt: str,
    history,
):
    if not history:
        history = [{"role": "system", "content": "You are a friendly chatbot"}]
    history.append({"role": "user", "content": prompt})

    yield history

    response = {"role": "assistant", "content": ""}
    for message in client.chat_completion(  # type: ignore
        history,
        temperature=0.95,
        top_p=0.9,
        max_tokens=512,
        stream=True,
        model="openai/gpt-oss-20b"
    ):
        response["content"] += (message.choices[0].delta.content or "") if message.choices else ""
        yield history + [response]

with gr.Blocks() as demo:
    gr.Markdown("Chat with GPT-OSS 20b 🤗")
    chatbot = gr.Chatbot(
        label="Agent",
        avatar_images=(
            None,
            "https://em-content.zobj.net/source/twitter/376/hugging-face_1f917.png",
        ),
    )
    prompt = gr.Textbox(max_lines=1, label="Chat Message")
    prompt.submit(respond, [prompt, chatbot], [chatbot])
    prompt.submit(lambda: "", None, [prompt])

if __name__ == "__main__":
    demo.launch()
```
The UI
https://gradio.app/guides/chatbot-specific-events
Chatbots - Chatbot Specific Events Guide
Our undo event will populate the textbox with the previous user message and also remove all subsequent assistant responses. In order to know the index of the last user message, we can pass `gr.UndoData` to our event handler function like so:

```python
def handle_undo(history, undo_data: gr.UndoData):
    return history[:undo_data.index], history[undo_data.index]['content'][0]["text"]
```

We then pass this function to the `undo` event!

```python
chatbot.undo(handle_undo, chatbot, [chatbot, prompt])
```

You'll notice that every bot response will now have an "undo icon" you can use to undo the response -

![undo_event](https://github.com/user-attachments/assets/180b5302-bc4a-4c3e-903c-f14ec2adcaa6)

Tip: You can also access the content of the user message with `undo_data.value`
The Undo Event
https://gradio.app/guides/chatbot-specific-events
Chatbots - Chatbot Specific Events Guide
The retry event will work similarly. We'll use `gr.RetryData` to get the index of the previous user message and remove all the subsequent messages from the history. Then we'll use the `respond` function to generate a new response. We could also get the previous prompt via the `value` property of `gr.RetryData`.

```python
def handle_retry(history, retry_data: gr.RetryData):
    new_history = history[:retry_data.index]
    previous_prompt = history[retry_data.index]['content'][0]["text"]
    yield from respond(previous_prompt, new_history)

...

chatbot.retry(handle_retry, chatbot, chatbot)
```

You'll see that the bot messages have a "retry" icon now -

![retry_event](https://github.com/user-attachments/assets/cec386a7-c4cd-4fb3-a2d7-78fd806ceac6)

Tip: The Hugging Face inference API caches responses, so in this demo, the retry button will not generate a new response.
The Retry Event
https://gradio.app/guides/chatbot-specific-events
Chatbots - Chatbot Specific Events Guide
By now you should hopefully be seeing the pattern! To let users like a message, we'll add a `.like` event to our chatbot. We'll pass it a function that accepts a `gr.LikeData` object. In this case, we'll just print the message that was either liked or disliked.

```python
def handle_like(data: gr.LikeData):
    if data.liked:
        print("You upvoted this response: ", data.value)
    else:
        print("You downvoted this response: ", data.value)

chatbot.like(handle_like, None, None)
```
The Like Event
https://gradio.app/guides/chatbot-specific-events
Chatbots - Chatbot Specific Events Guide
Same idea with the edit listener! With `gr.Chatbot(editable=True)`, you can capture user edits. The `gr.EditData` object tells us the index of the edited message and its new text. Below, we use this object to edit the history and delete any subsequent messages.

```python
def handle_edit(history, edit_data: gr.EditData):
    new_history = history[:edit_data.index]
    new_history[-1]['content'] = [{"text": edit_data.value, "type": "text"}]
    return new_history

...

chatbot.edit(handle_edit, chatbot, chatbot)
```
The Edit Event
https://gradio.app/guides/chatbot-specific-events
Chatbots - Chatbot Specific Events Guide
As a bonus, we'll also cover the `.clear()` event, which is triggered when the user clicks the clear icon to clear all messages. As a developer, you can attach additional events that should happen when this icon is clicked, e.g. to handle clearing of additional chatbot state:

```python
from uuid import uuid4
import gradio as gr

def clear():
    print("Cleared uuid")
    return uuid4()

def chat_fn(user_input, history, uuid):
    return f"{user_input} with uuid {uuid}"

with gr.Blocks() as demo:
    uuid_state = gr.State(uuid4)
    chatbot = gr.Chatbot()
    chatbot.clear(clear, outputs=[uuid_state])
    gr.ChatInterface(
        chat_fn,
        additional_inputs=[uuid_state],
        chatbot=chatbot,
    )

demo.launch()
```

In this example, the `clear` function, bound to the `chatbot.clear` event, returns a new UUID into our session state when the chat history is cleared via the trash icon. This can be seen in the `chat_fn` function, which references the UUID saved in our session state. This example also shows that you can use these events with `gr.ChatInterface` by passing in a custom `gr.Chatbot` object.
The Clear Event
https://gradio.app/guides/chatbot-specific-events
Chatbots - Chatbot Specific Events Guide
That's it! You now know how to implement the retry, undo, like, edit, and clear events for the Chatbot.
Conclusion
https://gradio.app/guides/chatbot-specific-events
Chatbots - Chatbot Specific Events Guide
The Slack bot will listen to messages mentioning it in channels. When it receives a message (which can include text as well as files), it will send it to your Gradio app via Gradio's built-in API. Your bot will reply with the response it receives from the API. Because Gradio's API is very flexible, you can create Slack bots that support text, images, audio, streaming, chat history, and a wide variety of other features very easily. ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/Screen%20Recording%202024-12-19%20at%203.30.00%E2%80%AFPM.gif)
How does it work?
https://gradio.app/guides/creating-a-slack-bot-from-a-gradio-app
Chatbots - Creating A Slack Bot From A Gradio App Guide
* Install the latest version of `gradio` and the `slack-bolt` library:

```bash
pip install --upgrade gradio slack-bolt~=1.0
```

* Have a running Gradio app. This app can be running locally or on Hugging Face Spaces. In this example, we will be using the [Gradio Playground Space](https://huggingface.co/spaces/abidlabs/gradio-playground-bot), which takes in an image and/or text and generates the code to generate the corresponding Gradio app.

Now, we are ready to get started!

1. Create a Slack App

1. Go to [api.slack.com/apps](https://api.slack.com/apps) and click "Create New App"
2. Choose "From scratch" and give your app a name
3. Select the workspace where you want to develop your app
4. Under "OAuth & Permissions", scroll to "Scopes" and add these Bot Token Scopes:
   - `app_mentions:read`
   - `chat:write`
   - `files:read`
   - `files:write`
5. On the same "OAuth & Permissions" page, scroll back up and click the button to install the app to your workspace
6. Note the "Bot User OAuth Token" (starts with `xoxb-`) that appears, as we'll need it later
7. Click on "Socket Mode" in the menu bar. When the page loads, click the toggle to "Enable Socket Mode"
8. Give your token a name, such as `socket-token`, and copy the token that is generated (starts with `xapp-`), as we'll need it later
9. Finally, go to the "Event Subscriptions" option in the menu bar. Click the toggle to "Enable Events" and subscribe to the `app_mention` bot event

2. Write a Slack bot

Let's start by writing a very simple Slack bot, just to make sure that everything is working. Write the following Python code in a file called `bot.py`, pasting the two tokens from step 6 and step 8 of the previous section.

```py
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

SLACK_BOT_TOKEN = "PASTE YOUR SLACK BOT TOKEN HERE"
SLACK_APP_TOKEN = "PASTE YOUR SLACK APP TOKEN HERE"

app = App(token=SLACK_BOT_TOKEN)

@app.event("app_mention")
def handle_app_mention_events(body, say):
    user_id = body["event"]["user"]
    say(f"Hi <@{user_id}>! You mentioned me and said: {body['event']['text']}")

if __name__ == "__main__":
    handler = SocketModeHandler(app, SLACK_APP_TOKEN)
    handler.start()
```
Prerequisites
https://gradio.app/guides/creating-a-slack-bot-from-a-gradio-app
Chatbots - Creating A Slack Bot From A Gradio App Guide
If that is working, we are ready to add Gradio-specific code. We will be using the [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) to query the Gradio Playground Space mentioned above. Here's the updated `bot.py` file:

```python
import os
import re

import httpx
from gradio_client import Client, handle_file
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

SLACK_BOT_TOKEN = "PASTE YOUR SLACK BOT TOKEN HERE"
SLACK_APP_TOKEN = "PASTE YOUR SLACK APP TOKEN HERE"

app = App(token=SLACK_BOT_TOKEN)
gradio_client = Client("abidlabs/gradio-playground-bot")

def download_image(url, filename):
    headers = {"Authorization": f"Bearer {SLACK_BOT_TOKEN}"}
    response = httpx.get(url, headers=headers)
    image_path = f"./images/{filename}"
    os.makedirs("./images", exist_ok=True)
    with open(image_path, "wb") as f:
        f.write(response.content)
    return image_path

def slackify_message(message):
    # Replace markdown links with Slack format and remove the code language
    # specifier after triple backticks
    pattern = r'\[(.*?)\]\((.*?)\)'
    cleaned = re.sub(pattern, r'<\2|\1>', message)
    cleaned = re.sub(r'```\w+\n', '```', cleaned)
    return cleaned.strip()

@app.event("app_mention")
def handle_app_mention_events(body, say):
    # Extract the message content without the bot mention
    text = body["event"]["text"]
    bot_user_id = body["authorizations"][0]["user_id"]
    clean_message = text.replace(f"<@{bot_user_id}>", "").strip()

    # Handle images if present (only the first image is used)
    files = []
    if "files" in body["event"]:
```
Prerequisites
https://gradio.app/guides/creating-a-slack-bot-from-a-gradio-app
Chatbots - Creating A Slack Bot From A Gradio App Guide
= body["authorizations"][0]["user_id"] clean_message = text.replace(f"<@{bot_user_id}>", "").strip() Handle images if present files = [] if "files" in body["event"]: for file in body["event"]["files"]: if file["filetype"] in ["png", "jpg", "jpeg", "gif", "webp"]: image_path = download_image(file["url_private_download"], file["name"]) files.append(handle_file(image_path)) break Submit to Gradio and send responses back to Slack for response in gradio_client.submit( message={"text": clean_message, "files": files}, ): cleaned_response = slackify_message(response[-1]) say(cleaned_response) if __name__ == "__main__": handler = SocketModeHandler(app, SLACK_APP_TOKEN) handler.start() ``` 3. Add the bot to your Slack Workplace Now, create a new channel or navigate to an existing channel in your Slack workspace where you want to use the bot. Click the "+" button next to "Channels" in your Slack sidebar and follow the prompts to create a new channel. Finally, invite your bot to the channel: 1. In your new channel, type `/invite @YourBotName` 2. Select your bot from the dropdown 3. Click "Invite to Channel" 4. That's it! Now you can mention your bot in any channel it's in, optionally attach an image, and it will respond with generated Gradio app code! The bot will: 1. Listen for mentions 2. Process any attached images 3. Send the text and images to your Gradio app 4. Stream the responses back to the Slack channel This is just a basic example - you can extend it to handle more types of files, add error handling, or integrate with different Gradio apps! ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/Screen%20Recording%202024-12-19%20at%203.30.00%E2%80%AFPM.gif) If you build a Slack bot from a Gradio app, feel free to share it on X and tag [the Gradio account](https://x.com/Gr
Prerequisites
https://gradio.app/guides/creating-a-slack-bot-from-a-gradio-app
Chatbots - Creating A Slack Bot From A Gradio App Guide
![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/Screen%20Recording%202024-12-19%20at%203.30.00%E2%80%AFPM.gif)

If you build a Slack bot from a Gradio app, feel free to share it on X and tag [the Gradio account](https://x.com/Gradio), and we are happy to help you amplify!
Prerequisites
https://gradio.app/guides/creating-a-slack-bot-from-a-gradio-app
Chatbots - Creating A Slack Bot From A Gradio App Guide
**Important Note**: if you are getting started, we recommend using `gr.ChatInterface` to create chatbots -- it's a high-level abstraction that makes it possible to create beautiful chatbot applications fast, often with a single line of code. [Read more about it here](/guides/creating-a-chatbot-fast).

This tutorial will show how to make chatbot UIs from scratch with Gradio's low-level Blocks API. This will give you full control over your Chatbot UI. You'll start by first creating a simple chatbot to display text, a second one to stream text responses, and finally a chatbot that can handle media files as well. The chatbot interface that we create will look something like this:

$demo_chatbot_streaming

**Prerequisite**: We'll be using the `gradio.Blocks` class to build our Chatbot demo. You can [read the Guide to Blocks first](https://gradio.app/blocks-and-event-listeners) if you are not already familiar with it. Also please make sure you are using the **latest version** of Gradio: `pip install --upgrade gradio`.
Introduction
https://gradio.app/guides/creating-a-custom-chatbot-with-blocks
Chatbots - Creating A Custom Chatbot With Blocks Guide
Let's start with recreating the simple demo above. As you may have noticed, our bot simply randomly responds "How are you?", "Today is a great day", or "I'm very hungry" to any input. Here's the code to create this with Gradio:

$code_chatbot_simple

There are three Gradio components here:

- A `Chatbot`, whose value stores the entire history of the conversation, as a list of messages between the user and bot.
- A `Textbox` where the user can type their message, and then hit enter/submit to trigger the chatbot response
- A `ClearButton` to clear the Textbox and the entire Chatbot history

We have a single function, `respond()`, which takes in the entire history of the chatbot, appends a random message, waits 1 second, and then returns the updated chat history. The `respond()` function also clears the textbox when it returns.

Of course, in practice, you would replace `respond()` with your own more complex function, which might call a pretrained model or an API, to generate a response.

$demo_chatbot_simple

Tip: For better type hinting and auto-completion in your IDE, you can use the `gr.ChatMessage` dataclass:

```python
from gradio import ChatMessage

def chat_function(message, history):
    history.append(ChatMessage(role="user", content=message))
    history.append(ChatMessage(role="assistant", content="Hello, how can I help you?"))
    return history
```
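For reference, here is a hedged reconstruction of the kind of demo the `$code_chatbot_simple` placeholder refers to (the official demo's exact parameters may differ):

```python
import random
import time

import gradio as gr

with gr.Blocks() as demo:
    chatbot = gr.Chatbot(type="messages")
    msg = gr.Textbox()
    clear = gr.ClearButton([msg, chatbot])

    def respond(message, chat_history):
        bot_message = random.choice(["How are you?", "Today is a great day", "I'm very hungry"])
        chat_history.append({"role": "user", "content": message})
        chat_history.append({"role": "assistant", "content": bot_message})
        time.sleep(1)
        # Returning "" clears the textbox along with updating the history
        return "", chat_history

    msg.submit(respond, [msg, chatbot], [msg, chatbot])

demo.launch()
```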
A Simple Chatbot Demo
https://gradio.app/guides/creating-a-custom-chatbot-with-blocks
Chatbots - Creating A Custom Chatbot With Blocks Guide
There are several ways we can improve the user experience of the chatbot above. First, we can stream responses so the user doesn't have to wait as long for a message to be generated. Second, we can have the user message appear immediately in the chat history, while the chatbot's response is being generated. Here's the code to achieve that:

$code_chatbot_streaming

You'll notice that when a user submits their message, we now _chain_ two events with `.then()`:

1. The first method `user()` updates the chatbot with the user message and clears the input field. Because we want this to happen instantly, we set `queue=False`, which would skip any queue had it been enabled. The chatbot's history is appended with `{"role": "user", "content": user_message}`.

2. The second method, `bot()`, updates the chatbot history with the bot's response. Finally, we construct the message character by character and `yield` the intermediate outputs as they are being constructed. Gradio automatically turns any function with the `yield` keyword [into a streaming output interface](/guides/key-features/iterative-outputs).

Of course, in practice, you would replace `bot()` with your own more complex function, which might call a pretrained model or an API, to generate a response.
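To make the chaining concrete, here is a hedged sketch of the two handlers and the `.then()` chain (the canned bot replies are stand-ins):

```python
import random
import time

import gradio as gr

with gr.Blocks() as demo:
    chatbot = gr.Chatbot(type="messages")
    msg = gr.Textbox()

    def user(user_message, history):
        # Append the user's message immediately and clear the textbox
        return "", history + [{"role": "user", "content": user_message}]

    def bot(history):
        # Stand-in response; replace with a model or API call
        bot_message = random.choice(["How are you?", "I'm very hungry"])
        history.append({"role": "assistant", "content": ""})
        # Stream the reply character by character
        for character in bot_message:
            history[-1]["content"] += character
            time.sleep(0.05)
            yield history

    msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(
        bot, chatbot, chatbot
    )

demo.launch()
```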
Add Streaming to your Chatbot
https://gradio.app/guides/creating-a-custom-chatbot-with-blocks
Chatbots - Creating A Custom Chatbot With Blocks Guide
The `gr.Chatbot` component supports a subset of markdown, including bold, italics, and code. For example, we could write a function that responds to a user's message with a bold **That's cool!**, like this:

```py
def bot(history):
    response = {"role": "assistant", "content": "**That's cool!**"}
    history.append(response)
    return history
```

In addition, it can handle media files, such as images, audio, and video. You can use the `MultimodalTextbox` component to easily upload all types of media files to your chatbot. You can customize the `MultimodalTextbox` further by passing in the `sources` parameter, which is a list of sources to enable.

To pass in a media file, we must pass in the file as a dictionary with a `path` key pointing to a local file and an `alt_text` key. The `alt_text` is optional, so you can also just pass in a dictionary with the single `path` key, like this: `{"path": "filepath"}`.

```python
def add_message(history, message):
    for x in message["files"]:
        history.append({"role": "user", "content": {"path": x}})
    if message["text"] is not None:
        history.append({"role": "user", "content": message["text"]})
    return history, gr.MultimodalTextbox(value=None, interactive=False, file_types=["image"], sources=["upload", "microphone"])
```

Putting this together, we can create a _multimodal_ chatbot with a multimodal textbox for a user to submit text and media files. The rest of the code looks pretty much the same as before:

$code_chatbot_multimodal
$demo_chatbot_multimodal

And you're done! That's all the code you need to build an interface for your chatbot model. Finally, we'll end our Guide with some links to Chatbots that are running on Spaces so that you can get an idea of what else is possible:

- [project-baize/Baize-7B](https://huggingface.co/spaces/project-baize/Baize-7B): A stylized chatbot that allows you to stop generation as well as regenerate responses.
Adding Markdown, Images, Audio, or Videos
https://gradio.app/guides/creating-a-custom-chatbot-with-blocks
Chatbots - Creating A Custom Chatbot With Blocks Guide
- [MAGAer13/mPLUG-Owl](https://huggingface.co/spaces/MAGAer13/mPLUG-Owl): A multimodal chatbot that allows you to upvote and downvote responses.
Adding Markdown, Images, Audio, or Videos
https://gradio.app/guides/creating-a-custom-chatbot-with-blocks
Chatbots - Creating A Custom Chatbot With Blocks Guide
Every element of the chatbot value is a dictionary with `role` and `content` keys. You can always use plain python dictionaries to add new values to the chatbot, but Gradio also provides the `ChatMessage` dataclass to help you with IDE autocompletion. The schema of `ChatMessage` is as follows:

```py
MessageContent = Union[str, FileDataDict, FileData, Component]

@dataclass
class ChatMessage:
    content: MessageContent | list[MessageContent]
    role: Literal["user", "assistant"]
    metadata: MetadataDict = None
    options: list[OptionDict] = None

class MetadataDict(TypedDict):
    title: NotRequired[str]
    id: NotRequired[int | str]
    parent_id: NotRequired[int | str]
    log: NotRequired[str]
    duration: NotRequired[float]
    status: NotRequired[Literal["pending", "done"]]

class OptionDict(TypedDict):
    label: NotRequired[str]
    value: str
```

For our purposes, the most important key is the `metadata` key, which accepts a dictionary. If this dictionary includes a `title` for the message, it will be displayed in a collapsible accordion representing a thought. It's that simple! Take a look at this example:

```python
import gradio as gr

with gr.Blocks() as demo:
    chatbot = gr.Chatbot(
        value=[
            gr.ChatMessage(
                role="user",
                content="What is the weather in San Francisco?"
            ),
            gr.ChatMessage(
                role="assistant",
                content="I need to use the weather API tool?",
                metadata={"title": "🧠 Thinking"}
            )
        ]
    )

demo.launch()
```

In addition to `title`, the dictionary provided to `metadata` can take several optional keys:

* `log`: an optional string value to be displayed in a subdued font next to the thought title.
* `duration`: an optional numeric value representing the duration of the thought/tool usage, in seconds. Displayed in parentheses, in a subdued font, next to the thought title.
The `ChatMessage` dataclass
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
* `status`: if set to `"pending"`, a spinner appears next to the thought title and the accordion is initialized open. If `status` is `"done"`, the thought accordion is initialized closed. If `status` is not provided, the thought accordion is initialized open and no spinner is displayed.
* `id` and `parent_id`: if these are provided, they can be used to nest thoughts inside other thoughts.
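As a quick illustration of the `status`, `id`, and `parent_id` keys, here is a minimal, hypothetical sketch (the titles, durations, and message content are made up):

```python
import gradio as gr
from gradio import ChatMessage

with gr.Blocks() as demo:
    chatbot = gr.Chatbot(
        type="messages",
        value=[
            ChatMessage(role="user", content="Plan a trip to Tokyo."),
            # Finished parent thought: "done" initializes the accordion closed
            ChatMessage(
                role="assistant",
                content="Searching for flights...",
                metadata={"title": "🛠️ Planning", "id": 1, "status": "done", "duration": 2.1},
            ),
            # Child thought nested under the parent via parent_id; "pending" shows a spinner
            ChatMessage(
                role="assistant",
                content="Comparing fares across airlines...",
                metadata={"title": "🔍 Fare search", "id": 2, "parent_id": 1, "status": "pending"},
            ),
        ],
    )

demo.launch()
```

Below, we show several complete examples of using `gr.Chatbot` and `gr.ChatInterface` to display tool use or thinking UIs.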
The `ChatMessage` dataclass
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
A real example using transformers.agents

We'll create a simple Gradio application for an agent that has access to a text-to-image tool.

Tip: Make sure you read the [smolagents documentation](https://huggingface.co/docs/smolagents/index) first

We'll start by importing the necessary classes from transformers and gradio.

```python
from dataclasses import asdict

import gradio as gr
from gradio import ChatMessage
from transformers import Tool, ReactCodeAgent  # type: ignore
from transformers.agents import stream_to_gradio, HfApiEngine  # type: ignore

# Import tool from Hub
image_generation_tool = Tool.from_space(
    space_id="black-forest-labs/FLUX.1-schnell",
    name="image_generator",
    description="Generates an image following your prompt. Returns a PIL Image.",
    api_name="/infer",
)

llm_engine = HfApiEngine("Qwen/Qwen2.5-Coder-32B-Instruct")

# Initialize the agent with both tools and engine
agent = ReactCodeAgent(tools=[image_generation_tool], llm_engine=llm_engine)
```

Then we'll build the UI:

```python
def interact_with_agent(prompt, history):
    messages = []
    yield messages
    for msg in stream_to_gradio(agent, prompt):
        messages.append(asdict(msg))
        yield messages
    yield messages

demo = gr.ChatInterface(
    interact_with_agent,
    chatbot=gr.Chatbot(
        label="Agent",
        avatar_images=(
            None,
            "https://em-content.zobj.net/source/twitter/53/robot-face_1f916.png",
        ),
    ),
    examples=[
        ["Generate an image of an astronaut riding an alligator"],
        ["I am writing a children's book for my daughter. Can you help me with some illustrations?"],
    ],
)
```

You can see the full demo code [here](https://huggingface.co/spaces/gradio/agent_chatbot/blob/main/app.py).

![transformers_agent_code](https://github.com/freddyaboulton/freddyboulton/assets/41651716/c8d21336-e0e6-4878-88ea-e6fcfef3552d)
Building with Agents
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
A real example using langchain agents

We'll create a UI for a langchain agent that has access to a search engine. We'll begin with imports and setting up the langchain agent. Note that you'll need an .env file with the following environment variables set -

```
SERPAPI_API_KEY=
HF_TOKEN=
OPENAI_API_KEY=
```

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_tools_agent, load_tools
from langchain_openai import ChatOpenAI

from gradio import ChatMessage
import gradio as gr

from dotenv import load_dotenv

load_dotenv()

model = ChatOpenAI(temperature=0, streaming=True)

tools = load_tools(["serpapi"])

# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-tools-agent")
agent = create_openai_tools_agent(
    model.with_config({"tags": ["agent_llm"]}), tools, prompt
)
agent_executor = AgentExecutor(agent=agent, tools=tools).with_config(
    {"run_name": "Agent"}
)
```

Then we'll create the Gradio UI:

```python
async def interact_with_langchain_agent(prompt, messages):
    messages.append(ChatMessage(role="user", content=prompt))
    yield messages
    async for chunk in agent_executor.astream(
        {"input": prompt}
    ):
        if "steps" in chunk:
            for step in chunk["steps"]:
                messages.append(ChatMessage(role="assistant", content=step.action.log,
                                            metadata={"title": f"🛠️ Used tool {step.action.tool}"}))
                yield messages
        if "output" in chunk:
            messages.append(ChatMessage(role="assistant", content=chunk["output"]))
            yield messages
```
Building with Agents
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
```python
with gr.Blocks() as demo:
    gr.Markdown("Chat with a LangChain Agent 🦜⛓️ and see its thoughts 💭")
    chatbot = gr.Chatbot(
        label="Agent",
        avatar_images=(
            None,
            "https://em-content.zobj.net/source/twitter/141/parrot_1f99c.png",
        ),
    )
    input = gr.Textbox(lines=1, label="Chat Message")
    input.submit(interact_with_langchain_agent, [input, chatbot], [chatbot])

demo.launch()
```

![langchain_agent_code](https://github.com/freddyaboulton/freddyboulton/assets/41651716/762283e5-3937-47e5-89e0-79657279ea67)

That's it! See our finished langchain demo [here](https://huggingface.co/spaces/gradio/langchain-agent).
Building with Agents
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
The Gradio Chatbot can natively display intermediate thoughts of a _thinking_ LLM. This makes it perfect for creating UIs that show how an AI model "thinks" while generating responses. The guide below will show you how to build a chatbot that displays Gemini AI's thought process in real-time.

A real example using Gemini 2.0 Flash Thinking API

Let's create a complete chatbot that shows its thoughts and responses in real-time. We'll use Google's Gemini API for accessing the Gemini 2.0 Flash Thinking LLM and Gradio for the UI.

We'll begin with imports and setting up the gemini client. Note that you'll need to [acquire a Google Gemini API key](https://aistudio.google.com/apikey) first -

```python
import gradio as gr
from gradio import ChatMessage
from typing import Iterator
import google.generativeai as genai

genai.configure(api_key="your-gemini-api-key")
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp-1219")
```

First, let's set up our streaming function that handles the model's output:

```python
def stream_gemini_response(user_message: str, messages: list) -> Iterator[list]:
    """
    Streams both thoughts and responses from the Gemini model.
    """
    # Initialize response from Gemini
    response = model.generate_content(user_message, stream=True)

    # Initialize buffers
    thought_buffer = ""
    response_buffer = ""
    thinking_complete = False

    # Add initial thinking message
    messages.append(
        ChatMessage(
            role="assistant",
            content="",
            metadata={"title": "⏳Thinking: *The thoughts produced by the Gemini2.0 Flash model are experimental"}
        )
    )

    for chunk in response:
        parts = chunk.candidates[0].content.parts
        current_chunk = parts[0].text
```
Building with Visibly Thinking LLMs
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
```python
        if len(parts) == 2 and not thinking_complete:
            # Complete thought and start response
            thought_buffer += current_chunk
            messages[-1] = ChatMessage(
                role="assistant",
                content=thought_buffer,
                metadata={"title": "⏳Thinking: *The thoughts produced by the Gemini2.0 Flash model are experimental"}
            )

            # Add response message
            messages.append(
                ChatMessage(
                    role="assistant",
                    content=parts[1].text
                )
            )
            thinking_complete = True

        elif thinking_complete:
            # Continue streaming response
            response_buffer += current_chunk
            messages[-1] = ChatMessage(
                role="assistant",
                content=response_buffer
            )

        else:
            # Continue streaming thoughts
            thought_buffer += current_chunk
            messages[-1] = ChatMessage(
                role="assistant",
                content=thought_buffer,
                metadata={"title": "⏳Thinking: *The thoughts produced by the Gemini2.0 Flash model are experimental"}
            )

        yield messages
```

Then, let's create the Gradio interface:

```python
with gr.Blocks() as demo:
    gr.Markdown("Chat with Gemini 2.0 Flash and See its Thoughts 💭")

    chatbot = gr.Chatbot(
        label="Gemini2.0 'Thinking' Chatbot",
        render_markdown=True,
    )

    input_box = gr.Textbox(
        lines=1,
        label="Chat Message",
        placeholder="Type your message here and press Enter..."
    )

    # Set up event handlers
    msg_store = gr.State("")  # Store for preserving user message
```
Building with Visibly Thinking LLMs
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
```python
    input_box.submit(
        lambda msg: (msg, msg, ""),  # Store message and clear input
        inputs=[input_box],
        outputs=[msg_store, input_box, input_box],
        queue=False
    ).then(
        user_message,  # Add user message to chat
        inputs=[msg_store, chatbot],
        outputs=[input_box, chatbot],
        queue=False
    ).then(
        stream_gemini_response,  # Generate and stream response
        inputs=[msg_store, chatbot],
        outputs=chatbot
    )

demo.launch()
```

This creates a chatbot that:

- Displays the model's thoughts in a collapsible section
- Streams the thoughts and final response in real-time
- Maintains a clean chat history

That's it! You now have a chatbot that not only responds to users but also shows its thinking process, creating a more transparent and engaging interaction. See our finished Gemini 2.0 Flash Thinking demo [here](https://huggingface.co/spaces/ysharma/Gemini2-Flash-Thinking).

Building with Citations

The Gradio Chatbot can display citations from LLM responses, making it perfect for creating UIs that show source documentation and references. This guide will show you how to build a chatbot that displays Claude's citations in real-time.

A real example using Anthropic's Citations API

Let's create a complete chatbot that shows both responses and their supporting citations. We'll use Anthropic's Claude API with citations enabled and Gradio for the UI.

We'll begin with imports and setting up the Anthropic client. Note that you'll need an `ANTHROPIC_API_KEY` environment variable set:

```python
import gradio as gr
import anthropic
import base64
from typing import List, Dict, Any

client = anthropic.Anthropic()
```

First, let's set up our message formatting functions that handle document preparation:
Building with Visibly Thinking LLMs
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
```python
def encode_pdf_to_base64(file_obj) -> str:
    """Convert uploaded PDF file to base64 string."""
    if file_obj is None:
        return None
    with open(file_obj.name, 'rb') as f:
        return base64.b64encode(f.read()).decode('utf-8')

def format_message_history(
    history: list,
    enable_citations: bool,
    doc_type: str,
    text_input: str,
    pdf_file: str
) -> List[Dict]:
    """Convert Gradio chat history to Anthropic message format."""
    formatted_messages = []

    # Add previous messages
    for msg in history[:-1]:
        if msg["role"] == "user":
            formatted_messages.append({"role": "user", "content": msg["content"]})

    # Prepare the latest message with document
    latest_message = {"role": "user", "content": []}

    if enable_citations:
        if doc_type == "plain_text":
            latest_message["content"].append({
                "type": "document",
                "source": {
                    "type": "text",
                    "media_type": "text/plain",
                    "data": text_input.strip()
                },
                "title": "Text Document",
                "citations": {"enabled": True}
            })
        elif doc_type == "pdf" and pdf_file:
            pdf_data = encode_pdf_to_base64(pdf_file)
            if pdf_data:
                latest_message["content"].append({
                    "type": "document",
                    "source": {
                        "type": "base64",
                        "media_type": "application/pdf",
                        "data": pdf_data
                    },
                    "title": pdf_file.name,
                    "citations": {"enabled": True}
                })

    # Add the user's question
    latest_message["content"].append({"type": "text", "text": history[-1]["content"]})

    formatted_messages.append(latest_message)
    return formatted_messages
```
Building with Visibly Thinking LLMs
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
latest_message["content"].append({"type": "text", "text": history[-1]["content"]}) formatted_messages.append(latest_message) return formatted_messages ``` Then, let's create our bot response handler that processes citations: ```python def bot_response( history: list, enable_citations: bool, doc_type: str, text_input: str, pdf_file: str ) -> List[Dict[str, Any]]: try: messages = format_message_history(history, enable_citations, doc_type, text_input, pdf_file) response = client.messages.create(model="claude-3-5-sonnet-20241022", max_tokens=1024, messages=messages) Initialize main response and citations main_response = "" citations = [] Process each content block for block in response.content: if block.type == "text": main_response += block.text if enable_citations and hasattr(block, 'citations') and block.citations: for citation in block.citations: if citation.cited_text not in citations: citations.append(citation.cited_text) Add main response history.append({"role": "assistant", "content": main_response}) Add citations in a collapsible section if enable_citations and citations: history.append({ "role": "assistant", "content": "\n".join([f"• {cite}" for cite in citations]), "metadata": {"title": "📚 Citations"} }) return history except Exception as e: history.append({ "role": "assistant", "content": "I apologize, but I encountered an error while processing your request." }) return history ``` Finally, let's create the Gradio interface: ```python with gr.Blocks() as demo: gr.Markdown("Chat with Citations") with gr.Row(sc
Building with Visibly Thinking LLMs
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
your request." }) return history ``` Finally, let's create the Gradio interface: ```python with gr.Blocks() as demo: gr.Markdown("Chat with Citations") with gr.Row(scale=1): with gr.Column(scale=4): chatbot = gr.Chatbot(bubble_full_width=False, show_label=False, scale=1) msg = gr.Textbox(placeholder="Enter your message here...", show_label=False, container=False) with gr.Column(scale=1): enable_citations = gr.Checkbox(label="Enable Citations", value=True, info="Toggle citation functionality" ) doc_type_radio = gr.Radio( choices=["plain_text", "pdf"], value="plain_text", label="Document Type", info="Choose the type of document to use") text_input = gr.Textbox(label="Document Content", lines=10, info="Enter the text you want to reference") pdf_input = gr.File(label="Upload PDF", file_types=[".pdf"], file_count="single", visible=False) Handle message submission msg.submit( user_message, [msg, chatbot, enable_citations, doc_type_radio, text_input, pdf_input], [msg, chatbot] ).then( bot_response, [chatbot, enable_citations, doc_type_radio, text_input, pdf_input], chatbot ) demo.launch() ``` This creates a chatbot that: - Supports both plain text and PDF documents for Claude to cite from - Displays Citations in collapsible sections using our `metadata` feature - Shows source quotes directly from the given documents The citations feature works particularly well with the Gradio Chatbot's `metadata` support, allowing us to create collapsible sections that keep the chat interface clean while still providing easy access to source documentation. That's it! You now have a chatbot that not only responds to users but also shows its sources, creating a more transparent and trustworthy interaction. See our finished Citations demo [here](https://huggingface.co/spaces/ysharma/a
Building with Visibly Thinking LLMs
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
That's it! You now have a chatbot that not only responds to users but also shows its sources, creating a more transparent and trustworthy interaction. See our finished Citations demo [here](https://huggingface.co/spaces/ysharma/anthropic-citations-with-gradio-metadata-key).
Building with Visibly Thinking LLMs
https://gradio.app/guides/agents-and-tool-usage
Chatbots - Agents And Tool Usage Guide
The chat widget appears as a small button in the corner of your website. When clicked, it opens a chat interface that communicates with your Gradio app via the JavaScript Client API. Users can ask questions and receive responses directly within the widget.
How does it work?
https://gradio.app/guides/creating-a-website-widget-from-a-gradio-chatbot
Chatbots - Creating A Website Widget From A Gradio Chatbot Guide
* A running Gradio app (local or on Hugging Face Spaces). In this example, we'll use the [Gradio Playground Space](https://huggingface.co/spaces/abidlabs/gradio-playground-bot), which helps generate code for Gradio apps based on natural language descriptions.

1. Create and Style the Chat Widget

First, add this HTML and CSS to your website:

```html
<div id="chat-widget" class="chat-widget">
    <button id="chat-toggle" class="chat-toggle">💬</button>
    <div id="chat-container" class="chat-container hidden">
        <div id="chat-header">
            <h3>Gradio Assistant</h3>
            <button id="close-chat">×</button>
        </div>
        <div id="chat-messages"></div>
        <div id="chat-input-area">
            <input type="text" id="chat-input" placeholder="Ask a question...">
            <button id="send-message">Send</button>
        </div>
    </div>
</div>

<style>
.chat-widget {
    position: fixed;
    bottom: 20px;
    right: 20px;
    z-index: 1000;
}

.chat-toggle {
    width: 50px;
    height: 50px;
    border-radius: 50%;
    background: #007bff;
    border: none;
    color: white;
    font-size: 24px;
    cursor: pointer;
}

.chat-container {
    position: fixed;
    bottom: 80px;
    right: 20px;
    width: 300px;
    height: 400px;
    background: white;
    border-radius: 10px;
    box-shadow: 0 0 10px rgba(0,0,0,0.1);
    display: flex;
    flex-direction: column;
}

.chat-container.hidden {
    display: none;
}

#chat-header {
    padding: 10px;
    background: #007bff;
    color: white;
    border-radius: 10px 10px 0 0;
    display: flex;
    justify-content: space-between;
    align-items: center;
}

#chat-messages {
    flex-grow: 1;
    overflow-y: auto;
    padding: 10px;
}

#chat-input-area {
    padding: 10px;
    border-top: 1px solid #eee;
    display: flex;
}

#chat-input {
    flex-grow: 1;
    padding: 8px;
    border: 1px solid #ddd;
    border-radius: 4px;
    margin-right: 8px;
}
```
Prerequisites
https://gradio.app/guides/creating-a-website-widget-from-a-gradio-chatbot
Chatbots - Creating A Website Widget From A Gradio Chatbot Guide
```html
.message {
    margin: 8px 0;
    padding: 8px;
    border-radius: 4px;
}

.user-message {
    background: #e9ecef;
    margin-left: 20px;
}

.bot-message {
    background: #f8f9fa;
    margin-right: 20px;
}
</style>
```

2. Add the JavaScript

Then, add the following JavaScript code (which uses the Gradio JavaScript Client to connect to the Space) to your website by including this in the `<head>` section of your website:

```html
<script type="module">
    import { Client } from "https://cdn.jsdelivr.net/npm/@gradio/client/dist/index.min.js";

    async function initChatWidget() {
        const client = await Client.connect("https://abidlabs-gradio-playground-bot.hf.space");

        const chatToggle = document.getElementById('chat-toggle');
        const chatContainer = document.getElementById('chat-container');
        const closeChat = document.getElementById('close-chat');
        const chatInput = document.getElementById('chat-input');
        const sendButton = document.getElementById('send-message');
        const messagesContainer = document.getElementById('chat-messages');

        chatToggle.addEventListener('click', () => {
            chatContainer.classList.remove('hidden');
        });

        closeChat.addEventListener('click', () => {
            chatContainer.classList.add('hidden');
        });

        async function sendMessage() {
            const userMessage = chatInput.value.trim();
            if (!userMessage) return;

            appendMessage(userMessage, 'user');
            chatInput.value = '';
```
Prerequisites
https://gradio.app/guides/creating-a-website-widget-from-a-gradio-chatbot
Chatbots - Creating A Website Widget From A Gradio Chatbot Guide
client.predict("/chat", { message: {"text": userMessage, "files": []} }); const message = result.data[0]; console.log(result.data[0]); const botMessage = result.data[0].join('\n'); appendMessage(botMessage, 'bot'); } catch (error) { console.error('Error:', error); appendMessage('Sorry, there was an error processing your request.', 'bot'); } } function appendMessage(text, sender) { const messageDiv = document.createElement('div'); messageDiv.className = `message ${sender}-message`; if (sender === 'bot') { messageDiv.innerHTML = marked.parse(text); } else { messageDiv.textContent = text; } messagesContainer.appendChild(messageDiv); messagesContainer.scrollTop = messagesContainer.scrollHeight; } sendButton.addEventListener('click', sendMessage); chatInput.addEventListener('keypress', (e) => { if (e.key === 'Enter') sendMessage(); }); } initChatWidget(); </script> ``` 3. That's it! Your website now has a chat widget that connects to your Gradio app! Users can click the chat button to open the widget and start interacting with your app. Customization You can customize the appearance of the widget by modifying the CSS. Some ideas: - Change the colors to match your website's theme - Adjust the size and position of the widget - Add animations for opening/closing - Modify the message styling ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/Screen%20Recording%202024-12-19%20at%203.32.46%E2%80%AFPM.gif) If you build a website widget from a Gradio app, feel free to share it on X and tag [the Gradio account](https://x.com/Gradio), and we are hap
Prerequisites
https://gradio.app/guides/creating-a-website-widget-from-a-gradio-chatbot
Chatbots - Creating A Website Widget From A Gradio Chatbot Guide
![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/Screen%20Recording%202024-12-19%20at%203.32.46%E2%80%AFPM.gif)

If you build a website widget from a Gradio app, feel free to share it on X and tag [the Gradio account](https://x.com/Gradio), and we are happy to help you amplify!
Prerequisites
https://gradio.app/guides/creating-a-website-widget-from-a-gradio-chatbot
Chatbots - Creating A Website Widget From A Gradio Chatbot Guide
Chatbots are a popular application of large language models (LLMs). Using Gradio, you can easily build a chat application and share it with your users, or try it out yourself using an intuitive UI. This tutorial uses `gr.ChatInterface()`, which is a high-level abstraction that allows you to create your chatbot UI fast, often with a _few lines of Python_. It can be easily adapted to support multimodal chatbots, or chatbots that require further customization.

**Prerequisites**: please make sure you are using the latest version of Gradio:

```bash
$ pip install --upgrade gradio
```
Introduction
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
If you have a chat server serving an OpenAI-API compatible endpoint (such as Ollama), you can spin up a ChatInterface in a single line of Python. First, also run `pip install openai`. Then, with your own URL, model, and optional token:

```python
import gradio as gr

gr.load_chat("http://localhost:11434/v1/", model="llama3.2", token="***").launch()
```

Read about `gr.load_chat` in [the docs](https://www.gradio.app/docs/gradio/load_chat). If you have your own model, keep reading to see how to create an application around any chat model in Python!
Note for OpenAI-API compatible endpoints
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
To create a chat application with `gr.ChatInterface()`, the first thing you should do is define your **chat function**. In the simplest case, your chat function should accept two arguments: `message` and `history` (the arguments can be named anything, but must be in this order).

- `message`: a `str` representing the user's most recent message.
- `history`: a list of openai-style dictionaries with `role` and `content` keys, representing the previous conversation history. May also include additional keys representing message metadata.

The `history` would look like this:

```python
[
    {"role": "user", "content": [{"type": "text", "text": "What is the capital of France?"}]},
    {"role": "assistant", "content": [{"type": "text", "text": "Paris"}]}
]
```

while the next `message` would be:

```py
"And what is its largest city?"
```

Your chat function simply needs to return:

* a `str` value, which is the chatbot's response based on the chat `history` and most recent `message`, for example, in this case:

```
Paris is also the largest city.
```

Let's take a look at a few example chat functions:

**Example: a chatbot that randomly responds with yes or no**

Let's write a chat function that responds `Yes` or `No` randomly. Here's our chat function:

```python
import random

def random_response(message, history):
    return random.choice(["Yes", "No"])
```

Now, we can plug this into `gr.ChatInterface()` and call the `.launch()` method to create the web interface:

```python
import gradio as gr

gr.ChatInterface(
    fn=random_response,
).launch()
```

That's it! Here's our running demo, try it out:

$demo_chatinterface_random_response

**Example: a chatbot that alternates between agreeing and disagreeing**

Of course, the previous example was very simplistic; it didn't take user input or the previous history into account!
Defining a chat function
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
```python
import gradio as gr

def alternatingly_agree(message, history):
    if len([h for h in history if h['role'] == "assistant"]) % 2 == 0:
        return f"Yes, I do think that: {message}"
    else:
        return "I don't think so"

gr.ChatInterface(
    fn=alternatingly_agree,
).launch()
```

We'll look at more realistic examples of chat functions in our next Guide, which shows [examples of using `gr.ChatInterface` with popular LLMs](../guides/chatinterface-examples).
Defining a chat function
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
In your chat function, you can use `yield` to generate a sequence of partial responses, each replacing the previous ones. This way, you'll end up with a streaming chatbot. It's that simple!

```python
import time
import gradio as gr

def slow_echo(message, history):
    for i in range(len(message)):
        time.sleep(0.3)
        yield "You typed: " + message[: i+1]

gr.ChatInterface(
    fn=slow_echo,
).launch()
```

While the response is streaming, the "Submit" button turns into a "Stop" button that can be used to stop the generator function.

Tip: Even though you are yielding the latest message at each iteration, Gradio only sends the "diff" of each message from the server to the frontend, which reduces latency and data consumption over your network.
Streaming chatbots
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
If you're familiar with Gradio's `gr.Interface` class, the `gr.ChatInterface` includes many of the same arguments that you can use to customize the look and feel of your Chatbot. For example, you can:

- add a title and description above your chatbot using the `title` and `description` arguments.
- add a theme or custom css using the `theme` and `css` arguments respectively.
- add `examples` and even enable `cache_examples`, which make your Chatbot easier for users to try out.
- customize the chatbot (e.g. to change the height or add a placeholder) or textbox (e.g. to add a max number of characters or add a placeholder).

**Adding examples**

You can add preset examples to your `gr.ChatInterface` with the `examples` parameter, which takes a list of string examples. Any examples will appear as "buttons" within the Chatbot before any messages are sent. If you'd like to include images or other files as part of your examples, you can do so by using this dictionary format for each example instead of a string: `{"text": "What's in this image?", "files": ["cheetah.jpg"]}`. Each file will be a separate message that is added to your Chatbot history.

You can change the displayed text for each example by using the `example_labels` argument. You can add icons to each example as well using the `example_icons` argument. Both of these arguments take a list of strings, which should be the same length as the `examples` list.

If you'd like to cache the examples so that they are pre-computed and the results appear instantly, set `cache_examples=True`.

**Customizing the chatbot or textbox component**

If you want to customize the `gr.Chatbot` or `gr.Textbox` that compose the `ChatInterface`, then you can pass in your own chatbot or textbox components. Here's an example of how to apply the parameters we've discussed in this section:
Customizing the Chat UI
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
```python
import gradio as gr

def yes_man(message, history):
    if message.endswith("?"):
        return "Yes"
    else:
        return "Ask me anything!"

gr.ChatInterface(
    yes_man,
    chatbot=gr.Chatbot(height=300),
    textbox=gr.Textbox(placeholder="Ask me a yes or no question", container=False, scale=7),
    title="Yes Man",
    description="Ask Yes Man any question",
    theme="ocean",
    examples=["Hello", "Am I cool?", "Are tomatoes vegetables?"],
    cache_examples=True,
).launch()
```

Note that `theme` is passed to the `gr.ChatInterface` constructor, not to `.launch()`.

Here's another example that adds a "placeholder" for your chat interface, which appears before the user has started chatting. The `placeholder` argument of `gr.Chatbot` accepts Markdown or HTML:

```python
gr.ChatInterface(
    yes_man,
    chatbot=gr.Chatbot(placeholder="<strong>Your Personal Yes-Man</strong><br>Ask Me Anything"),
    ...
```

The placeholder appears vertically and horizontally centered in the chatbot.
Customizing the Chat UI
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
You may want to add multimodal capabilities to your chat interface. For example, you may want users to be able to upload images or files to your chatbot and ask questions about them. You can make your chatbot "multimodal" by passing in a single parameter (`multimodal=True`) to the `gr.ChatInterface` class.

When `multimodal=True`, the signature of your chat function changes slightly: the first parameter of your function (what we referred to as `message` above) should accept a dictionary consisting of the submitted text and uploaded files that looks like this:

```py
{
    "text": "user input",
    "files": [
        "uploaded_file_1_path.ext",
        "uploaded_file_2_path.ext",
        ...
    ]
}
```

The second parameter of your chat function, `history`, will be in the same openai-style dictionary format as before. However, if the history contains uploaded files, the `content` key will be a dictionary with a "type" key whose value is "file", and the file will be represented as a dictionary. All the files will be grouped into a single message in the history. So after uploading two files and asking a question, your history might look like this:

```python
[
    {"role": "user", "content": [
        {"type": "file", "file": {"path": "cat1.png"}},
        {"type": "file", "file": {"path": "cat2.png"}},
        {"type": "text", "text": "What's the difference between these two images?"}
    ]}
]
```

The return type of your chat function does *not change* when setting `multimodal=True` (i.e. in the simplest case, you should still return a string value). We discuss more complex cases, e.g. returning files, [below](returning-complex-responses).

If you are customizing a multimodal chat interface, you should pass in an instance of `gr.MultimodalTextbox` to the `textbox` parameter. You can customize the `MultimodalTextbox` further by passing in the `sources` parameter, which is a list of sources to enable.
Multimodal Chat Interface
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
Here's an example that illustrates how to set up and customize a multimodal chat interface:

```python
import gradio as gr

def count_images(message, history):
    num_images = len(message["files"])
    total_images = 0
    for message in history:
        for content in message["content"]:
            if content["type"] == "file":
                total_images += 1
    return f"You just uploaded {num_images} images, total uploaded: {total_images+num_images}"

demo = gr.ChatInterface(
    fn=count_images,
    examples=[
        {"text": "No files", "files": []}
    ],
    multimodal=True,
    textbox=gr.MultimodalTextbox(file_count="multiple", file_types=["image"], sources=["upload", "microphone"])
)

demo.launch()
```
Multimodal Chat Interface
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
You may want to add additional inputs to your chat function and expose them to your users through the chat UI. For example, you could add a textbox for a system prompt, or a slider that sets the number of tokens in the chatbot's response. The `gr.ChatInterface` class supports an `additional_inputs` parameter which can be used to add additional input components.

The `additional_inputs` parameter accepts a component or a list of components. You can pass the component instances directly, or use their string shortcuts (e.g. `"textbox"` instead of `gr.Textbox()`). If you pass in component instances, and they have _not_ already been rendered, then the components will appear underneath the chatbot within a `gr.Accordion()`.

Here's a complete example:

$code_chatinterface_system_prompt

If the components you pass into the `additional_inputs` have already been rendered in a parent `gr.Blocks()`, then they will _not_ be re-rendered in the accordion. This provides flexibility in deciding where to lay out the input components. In the example below, we position the `gr.Textbox()` on top of the Chatbot UI, while keeping the slider underneath.

```python
import gradio as gr
import time

def echo(message, history, system_prompt, tokens):
    response = f"System prompt: {system_prompt}\n Message: {message}."
    for i in range(min(len(response), int(tokens))):
        time.sleep(0.05)
        yield response[: i+1]

with gr.Blocks() as demo:
    system_prompt = gr.Textbox("You are helpful AI.", label="System Prompt")
    slider = gr.Slider(10, 100, render=False)

    gr.ChatInterface(
        echo,
        additional_inputs=[system_prompt, slider],
    )

demo.launch()
```

**Examples with additional inputs**

You can also add example values for your additional inputs.
Additional Inputs
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
Pass in a list of lists to the `examples` parameter, where each inner list represents one sample, and each inner list should be `1 + len(additional_inputs)` long. The first element in the inner list should be the example value for the chat message, and each subsequent element should be an example value for one of the additional inputs, in order. When additional inputs are provided, examples are rendered in a table underneath the chat interface (see the sketch below).

If you need to create something even more custom, then it's best to construct the chatbot UI using the low-level `gr.Blocks()` API. We have [a dedicated guide for that here](/guides/creating-a-custom-chatbot-with-blocks).
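To make the format concrete, here's a minimal sketch reusing the `echo` function from the snippet above; the specific example values and labels are our own illustration:

```python
import time
import gradio as gr

def echo(message, history, system_prompt, tokens):
    response = f"System prompt: {system_prompt}\n Message: {message}."
    for i in range(min(len(response), int(tokens))):
        time.sleep(0.05)
        yield response[: i+1]

gr.ChatInterface(
    echo,
    additional_inputs=[
        gr.Textbox("You are helpful AI.", label="System Prompt"),
        gr.Slider(10, 100, label="Max Tokens"),
    ],
    # Each inner list: [chat message, system prompt value, slider value]
    examples=[
        ["Hello", "You are helpful AI.", 50],
        ["Tell me a joke", "You are a comedian.", 80],
    ],
).launch()
```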
Additional Inputs
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
In the same way that you can accept additional inputs into your chat function, you can also return additional outputs. Simply pass in a list of components to the `additional_outputs` parameter in `gr.ChatInterface` and return additional values for each component from your chat function. Here's an example that extracts code and outputs it into a separate `gr.Code` component: $code_chatinterface_artifacts **Note:** unlike the case of additional inputs, the components passed in `additional_outputs` must be already defined in your `gr.Blocks` context -- they are not rendered automatically. If you need to render them after your `gr.ChatInterface`, you can set `render=False` when they are first defined and then `.render()` them in the appropriate section of your `gr.Blocks()` as we do in the example above.
Additional Outputs
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
We mentioned earlier that in the simplest case, your chat function should return a `str` response, which will be rendered as Markdown in the chatbot. However, you can also return more complex responses as we discuss below:

**Returning files or Gradio components**

Currently, the following Gradio components can be displayed inside the chat interface:

* `gr.Image`
* `gr.Plot`
* `gr.Audio`
* `gr.HTML`
* `gr.Video`
* `gr.Gallery`
* `gr.File`

Simply return one of these components from your function to use it with `gr.ChatInterface`. Here's an example that returns an audio file:

```py
import gradio as gr

def music(message, history):
    if message.strip():
        return gr.Audio("https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav")
    else:
        return "Please provide the name of an artist"

gr.ChatInterface(
    music,
    textbox=gr.Textbox(placeholder="Which artist's music do you want to listen to?", scale=7),
).launch()
```

Similarly, you could return image files with `gr.Image`, video files with `gr.Video`, or arbitrary files with the `gr.File` component.

**Returning Multiple Messages**

You can return multiple assistant messages from your chat function simply by returning a `list` of messages, each of which is a valid chat type. This lets you, for example, send a message along with files, as in the following example:

$code_chatinterface_echo_multimodal

**Displaying intermediate thoughts or tool usage**

The `gr.ChatInterface` class supports displaying intermediate thoughts or tool usage directly in the chatbot.

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/nested-thought.png)

To do this, you will need to return a `gr.ChatMessage` object from your chat function. Here is the schema of the `gr.ChatMessage` data class as well as two internal typed dictionaries:
Returning Complex Responses
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
```py
MessageContent = Union[str, FileDataDict, FileData, Component]

@dataclass
class ChatMessage:
    content: MessageContent | list[MessageContent]
    metadata: MetadataDict = None
    options: list[OptionDict] = None

class MetadataDict(TypedDict):
    title: NotRequired[str]
    id: NotRequired[int | str]
    parent_id: NotRequired[int | str]
    log: NotRequired[str]
    duration: NotRequired[float]
    status: NotRequired[Literal["pending", "done"]]

class OptionDict(TypedDict):
    label: NotRequired[str]
    value: str
```

As you can see, the `gr.ChatMessage` dataclass is similar to the openai-style message format, e.g. it has a "content" key that refers to the chat message content. But it also includes a "metadata" key whose value is a dictionary. If this dictionary includes a "title" key, the resulting message is displayed as an intermediate thought with the title displayed on top of the thought. Here's an example showing the usage:

$code_chatinterface_thoughts

You can even show nested thoughts, which is useful for agent demos in which one tool may call other tools. To display nested thoughts, include "id" and "parent_id" keys in the "metadata" dictionary. Read our [dedicated guide on displaying intermediate thoughts and tool usage](/guides/agents-and-tool-usage) for more realistic examples.

**Providing preset responses**

When returning an assistant message, you may want to provide preset options that a user can choose in response. To do this, you will again return a `gr.ChatMessage` instance from your chat function. This time, make sure to set the `options` key specifying the preset responses.
Returning Complex Responses
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
As shown in the schema for `gr.ChatMessage` above, the value corresponding to the `options` key should be a list of dictionaries, each with a `value` (a string that is the value that should be sent to the chat function when this response is clicked) and an optional `label` (if provided, the text displayed as the preset response instead of the `value`). This example illustrates how to use preset responses:

$code_chatinterface_options
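As a minimal sketch of the same idea (our own illustration, not the $code_chatinterface_options demo itself), a chat function returning preset options might look like this:

```python
import gradio as gr

def respond(message, history):
    # Offer two clickable preset replies; clicking one sends its `value`
    # back to this chat function as the next user message.
    return gr.ChatMessage(
        content="I can go into more detail on that. Would you like me to?",
        options=[
            {"value": "Yes, tell me more.", "label": "Yes"},
            {"value": "No, that's enough."},
        ],
    )

gr.ChatInterface(respond).launch()
```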
Returning Complex Responses
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
You may wish to modify the value of the chatbot with your own events, other than those prebuilt into the `gr.ChatInterface`. For example, you could create a dropdown that prefills the chat history with certain conversations, or add a separate button to clear the conversation history. The `gr.ChatInterface` supports these events, but you need to use the `gr.ChatInterface.chatbot_value` as the input or output component in such events. In this example, we use a `gr.Radio` component to prefill the chatbot with certain conversations:

$code_chatinterface_prefill
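For instance, here is a minimal sketch, based on the description above, of a separate button that clears the conversation by writing to `chatbot_value`:

```python
import gradio as gr

def respond(message, history):
    return f"You said: {message}"

with gr.Blocks() as demo:
    chat = gr.ChatInterface(respond)
    clear_btn = gr.Button("Clear conversation")
    # Use the ChatInterface's chatbot_value as the output of the custom event
    clear_btn.click(lambda: [], None, chat.chatbot_value)

demo.launch()
```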
Modifying the Chatbot Value Directly
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
Once you've built your Gradio chat interface and are hosting it on [Hugging Face Spaces](https://hf.space) or somewhere else, then you can query it with a simple API. The API route will be the name of the function you pass to the ChatInterface. So if `gr.ChatInterface(respond)`, then the API route is `/respond`. The endpoint just expects the user's message and will return the response, internally keeping track of the message history (see the sketch at the end of this section).

![](https://github.com/gradio-app/gradio/assets/1778297/7b10d6db-6476-4e2e-bebd-ecda802c3b8f)

To use the endpoint, you should use either the [Gradio Python Client](/guides/getting-started-with-the-python-client) or the [Gradio JS client](/guides/getting-started-with-the-js-client). Or, you can deploy your Chat Interface to other platforms, such as a:

* Slack bot [[tutorial]](../guides/creating-a-slack-bot-from-a-gradio-app)
* Website widget [[tutorial]](../guides/creating-a-website-widget-from-a-gradio-chatbot)
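For example, querying a `/respond` endpoint with the Gradio Python Client might look like the sketch below; the Space ID here is hypothetical, so substitute your own app's URL or Space ID:

```python
from gradio_client import Client

# Connect to a hosted ChatInterface (hypothetical Space ID)
client = Client("your-username/your-chat-space")

# The endpoint name matches the chat function's name, e.g. /respond
answer = client.predict("What is the capital of France?", api_name="/respond")
print(answer)
```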
Using Your Chatbot via API
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
You can enable persistent chat history for your ChatInterface, allowing users to maintain multiple conversations and easily switch between them. When enabled, conversations are stored locally and privately in the user's browser using local storage. So if you deploy a ChatInterface e.g. on [Hugging Face Spaces](https://hf.space), each user will have their own separate chat history that won't interfere with other users' conversations. This means multiple users can interact with the same ChatInterface simultaneously while maintaining their own private conversation histories. To enable this feature, simply set `gr.ChatInterface(save_history=True)` (as shown in the example in the next section). Users will then see their previous conversations in a side panel and can continue any previous chat or start a new one.
Chat History
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
To gather feedback on your chat model, set `gr.ChatInterface(flagging_mode="manual")` and users will be able to thumbs-up or thumbs-down assistant responses. Each flagged response, along with the entire chat history, will get saved in a CSV file in the app working directory (this can be configured via the `flagging_dir` parameter).

You can also change the feedback options via the `flagging_options` parameter. The default options are "Like" and "Dislike", which appear as the thumbs-up and thumbs-down icons. Any other options appear under a dedicated flag icon. This example shows a ChatInterface that has both chat history (mentioned in the previous section) and user feedback enabled:

$code_chatinterface_streaming_echo

Note that in this example, we set several flagging options: "Like", "Spam", "Inappropriate", "Other". Because the case-sensitive string "Like" is one of the flagging options, the user will see a thumbs-up icon next to each assistant message. The three other flagging options will appear in a dropdown under the flag icon.
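In outline, such an app might look like the following sketch (our own minimal illustration, not the $code_chatinterface_streaming_echo demo itself):

```python
import gradio as gr

def echo(message, history):
    return message

gr.ChatInterface(
    echo,
    save_history=True,       # persistent, per-browser conversation history
    flagging_mode="manual",  # thumbs-up / thumbs-down on assistant responses
    flagging_options=["Like", "Spam", "Inappropriate", "Other"],
).launch()
```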
Collecting User Feedback
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
Now that you've learned about the `gr.ChatInterface` class and how it can be used to create chatbot UIs quickly, we recommend reading one of the following:

* [Our next Guide](../guides/chatinterface-examples) shows examples of how to use `gr.ChatInterface` with popular LLM libraries.
* If you'd like to build very custom chat applications from scratch, you can build them using the low-level Blocks API, as [discussed in this Guide](../guides/creating-a-custom-chatbot-with-blocks).
* Once you've deployed your Gradio Chat Interface, it's easy to use in other applications because of the built-in API. Here's a tutorial on [how to deploy a Gradio chat interface as a Discord bot](../guides/creating-a-discord-bot-from-a-gradio-app).
What's Next?
https://gradio.app/guides/creating-a-chatbot-fast
Chatbots - Creating A Chatbot Fast Guide
An MCP (Model Context Protocol) server is a standardized way to expose tools so that they can be used by LLMs. A tool can provide an LLM functionality that it does not have natively, such as the ability to generate images or calculate the prime factors of a number.
What is an MCP Server?
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
LLMs are famously not great at counting the number of letters in a word (e.g. the number of "r"-s in "strawberry"). But what if we equip them with a tool to help? Let's start by writing a simple Gradio app that counts the number of letters in a word or phrase:

$code_letter_counter

Notice that we have: (1) included a detailed docstring for our function, and (2) set `mcp_server=True` in `.launch()`. This is all that's needed for your Gradio app to serve as an MCP server! Now, when you run this app, it will:

1. Start the regular Gradio web interface
2. Start the MCP server
3. Print the MCP server URL in the console

The MCP server will be accessible at:

```
http://your-server:port/gradio_api/mcp/
```

Gradio automatically converts the `letter_counter` function into an MCP tool that can be used by LLMs. The docstring of the function and the type hints of arguments will be used to generate the description of the tool and its parameters. The name of the function will be used as the name of your tool. Any initial values you provide to your input components (e.g. "strawberry" and "r" in the `gr.Textbox` components above) will be used as the default values if your LLM doesn't specify a value for that particular input parameter.

Now, all you need to do is add this URL endpoint to your MCP Client (e.g. Claude Desktop, Cursor, or Cline), which typically means pasting this config in the settings:

```
{
  "mcpServers": {
    "gradio": {
      "url": "http://your-server:port/gradio_api/mcp/"
    }
  }
}
```

(By the way, you can find the exact config to copy-paste by going to the "View API" link in the footer of your Gradio app, and then clicking on "MCP").

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api-mcp.png)
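Since the $code_letter_counter listing isn't shown inline here, a minimal sketch of what it might contain (the docstring format and `mcp_server=True` are the essential parts; the rest is our own illustration):

```python
import gradio as gr

def letter_counter(word: str, letter: str) -> int:
    """
    Count the number of occurrences of a letter in a word or phrase.

    Args:
        word (str): The word or phrase to search through.
        letter (str): The letter to count.
    """
    return word.lower().count(letter.lower())

demo = gr.Interface(
    letter_counter,
    inputs=[gr.Textbox("strawberry"), gr.Textbox("r")],
    outputs=gr.Number(),
)
demo.launch(mcp_server=True)
```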
Example: Counting Letters in a Word
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
1. **Tool Conversion**: Each API endpoint in your Gradio app is automatically converted into an MCP tool with a corresponding name, description, and input schema. To view the tools and schemas, visit http://your-server:port/gradio_api/mcp/schema or go to the "View API" link in the footer of your Gradio app, and then click on "MCP".

2. **Environment variable support**. There are two ways to enable the MCP server functionality:

* Using the `mcp_server` parameter, as shown above:

```python
demo.launch(mcp_server=True)
```

* Using environment variables:

```bash
export GRADIO_MCP_SERVER=True
```

3. **File Handling**: The Gradio MCP server automatically handles file data conversions, including:

- Processing image files and returning them in the correct format
- Managing temporary file storage

By default, the Gradio MCP server accepts input images and files as full URLs ("http://..." or "https://..."). For convenience, an additional STDIO-based MCP server is also generated, which can be used to upload files to any remote Gradio app and which returns a URL that can be used for subsequent tool calls.

4. **Hosted MCP Servers on 🤗 Spaces**: You can publish your Gradio application for free on Hugging Face Spaces, which will allow you to have a free hosted MCP server. Here's an example of such a Space: https://huggingface.co/spaces/abidlabs/mcp-tools. Notice that you can add this config to your MCP Client to start using the tools from this Space immediately:

```
{
  "mcpServers": {
    "gradio": {
      "url": "https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/"
    }
  }
}
```

<video src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/mcp_guide1.mp4" style="width:100%" controls preload> </video>
Key features of the Gradio <> MCP Integration
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
If there's an existing Space that you'd like to use as an MCP server, you'll need to do three things:

1. First, [duplicate the Space](https://huggingface.co/docs/hub/en/spaces-more-ways-to-create#duplicating-a-space) if it is not your own Space. This will allow you to make changes to the app. If the Space requires a GPU, set the hardware of the duplicated Space to be the same as the original Space. You can make it either a public Space or a private Space, since it is possible to use either as an MCP server, as described below.

2. Then, add docstrings to the functions that you'd like the LLM to be able to call as a tool. The docstring should be in the same format as the example code above.

3. Finally, add `mcp_server=True` in `.launch()`.

That's it!
Converting an Existing Space
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
You can use either a public Space or a private Space as an MCP server. If you'd like to use a private Space as an MCP server (or a ZeroGPU Space with your own quota), then you will need to provide your [Hugging Face token](https://huggingface.co/settings/token) when you make your request. To do this, simply add it as a header in your config like this:

```
{
  "mcpServers": {
    "gradio": {
      "url": "https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/",
      "headers": {
        "Authorization": "Bearer <YOUR-HUGGING-FACE-TOKEN>"
      }
    }
  }
}
```
Private Spaces
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
You may wish to authenticate users more precisely or let them provide other kinds of credentials or tokens in order to provide a custom experience for different users.

Gradio allows you to access the underlying `starlette.Request` that has made the tool call, which means that you can access headers, the originating IP address, or any other information that is part of the network request. To do this, simply add a parameter in your function of the type `gr.Request`, and Gradio will automatically inject the request object as the parameter.

Here's an example:

```py
import gradio as gr

def echo_headers(x, request: gr.Request):
    return str(dict(request.headers))

gr.Interface(echo_headers, "textbox", "textbox").launch(mcp_server=True)
```

This MCP server will simply ignore the user's input and echo back all of the headers from a user's request. One can build more complex apps using the same idea. See the [docs on `gr.Request`](https://www.gradio.app/main/docs/gradio/request) for more information (note that only the core Starlette attributes of the `gr.Request` object will be present; attributes such as Gradio's `.session_hash` will not be present).

**Using the `gr.Header` class**

A common pattern in MCP server development is to use authentication headers to call services on behalf of your users. Instead of using a `gr.Request` object like in the example above, you can use a `gr.Header` argument. Gradio will automatically extract that header from the incoming request (if it exists) and pass it to your function.

In the example below, the `X-API-Token` header is extracted from the incoming request and passed in as the `x_api_token` argument to `make_api_request_on_behalf_of_user`. The benefit of using `gr.Header` is that the MCP connection docs will automatically display the headers you need to supply when connecting to the server! See the image below:
Authentication and Credentials
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
```python
import gradio as gr

def make_api_request_on_behalf_of_user(prompt: str, x_api_token: gr.Header):
    """Make a request to everyone's favorite API.

    Args:
        prompt: The prompt to send to the API.

    Returns:
        The response from the API.

    Raises:
        AssertionError: If the API token is not valid.
    """
    return "Hello from the API" if not x_api_token else "Hello from the API with token!"

demo = gr.Interface(
    make_api_request_on_behalf_of_user,
    [
        gr.Textbox(label="Prompt"),
    ],
    gr.Textbox(label="Response"),
)
demo.launch(mcp_server=True)
```

![MCP Header Connection Page](https://github.com/user-attachments/assets/e264eedf-a91a-476b-880d-5be0d5934134)

**Sending Progress Updates**

The Gradio MCP server automatically sends progress updates to your MCP Client based on the queue in the Gradio application. If you'd like to send custom progress updates, you can do so using the same mechanism as you would use to display progress updates in the UI of your Gradio app: the `gr.Progress` class! Here's an example of how to do this:

$code_mcp_progress

[Here are the docs](https://www.gradio.app/docs/gradio/progress) for the `gr.Progress` class, which can also automatically track `tqdm` calls.

Note: by default, progress notifications are enabled for all MCP tools, even if the corresponding Gradio functions do not include a `gr.Progress`. However, this can add some overhead to the MCP tool (typically ~500ms). To disable progress notifications, you can set `queue=False` in your Gradio event handler to skip the overhead related to subscribing to the queue's progress updates.
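A minimal sketch of custom progress updates (our own illustration, not the $code_mcp_progress demo itself) might look like this:

```python
import time
import gradio as gr

def slow_tool(prompt: str, progress=gr.Progress()) -> str:
    """
    Process a prompt, reporting progress back to the MCP client.

    Args:
        prompt (str): The text to process.
    """
    # progress.tqdm wraps an iterable and emits a progress update at each step
    for _ in progress.tqdm(range(10), desc="Processing"):
        time.sleep(0.2)
    return f"Processed: {prompt}"

gr.Interface(slow_tool, "textbox", "textbox").launch(mcp_server=True)
```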
Authentication and Credentials
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
Gradio automatically sets the tool name based on the name of your function, and the description from the docstring of your function. But you may want to change how the description appears to your LLM. You can do this by using the `api_description` parameter in `Interface`, `ChatInterface`, or any event listener. This parameter takes three different kinds of values:

* `None` (default): the tool description is automatically created from the docstring of the function (or its parent's docstring if it does not have a docstring but inherits from a method that does.)
* `False`: no tool description appears to the LLM.
* `str`: an arbitrary string to use as the tool description.

In addition to modifying the tool descriptions, you can also toggle which tools appear to the LLM. You can do this by setting the `show_api` parameter, which is `True` by default. Setting it to `False` hides the endpoint from the API docs and from the MCP server. If you expose multiple tools, users of your app will also be able to toggle which tools they'd like to add to their MCP server by checking boxes in the "view MCP or API" panel.

Here's an example that shows the `api_description` and `show_api` parameters in action:

$code_mcp_tools
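As a rough sketch of how these two parameters might be used together (our own illustration, not the $code_mcp_tools demo; the function names are hypothetical):

```python
import gradio as gr

def add(a: float, b: float) -> float:
    """Add two numbers together."""
    return a + b

def debug_echo(text: str) -> str:
    """An internal helper we don't want exposed as a tool."""
    return text

with gr.Blocks() as demo:
    a, b, total = gr.Number(), gr.Number(), gr.Number()
    gr.Button("Add").click(
        add, [a, b], total,
        api_description="Add two numbers and return their sum.",  # overrides the docstring
    )
    txt_in, txt_out = gr.Textbox(), gr.Textbox()
    txt_in.submit(debug_echo, txt_in, txt_out, show_api=False)  # hidden from API docs and MCP

demo.launch(mcp_server=True)
```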
Modifying Tool Descriptions
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
In addition to tools (which execute functions generally and are the default for any function exposed through the Gradio MCP integration), MCP supports two other important primitives: **resources** (for exposing data) and **prompts** (for defining reusable templates). Gradio provides decorators to easily create MCP servers with all three capabilities.

**Creating MCP Resources**

Use the `@gr.mcp.resource` decorator on any function to expose data through your Gradio app. Resources can be static (always available at a fixed URI) or templated (with parameters in the URI).

$code_mcp_resources_and_prompts

In this example:

- The `get_greeting` function is exposed as a resource with a URI template `greeting://{name}`
- When an MCP client requests `greeting://Alice`, it receives "Hello, Alice!"
- Resources can also return images and other types of files or binary data. In order to return non-text data, you should specify the `mime_type` parameter in `@gr.mcp.resource()` and return a Base64 string from your function.

**Creating MCP Prompts**

Prompts help standardize how users interact with your tools. They're especially useful for complex workflows that require specific formatting or multiple steps. The `greet_user` function in the example above is decorated with `@gr.mcp.prompt()`, which:

- Makes it available as a prompt template in MCP clients
- Accepts parameters (`name` and `style`) to customize the output
- Returns a structured prompt that guides the LLM's behavior
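Based on the description above, a sketch of how `get_greeting` and `greet_user` might be written; the decorator arguments here are inferred from the text, so treat them as assumptions:

```python
import gradio as gr

@gr.mcp.resource("greeting://{name}")  # URI template inferred from the description above
def get_greeting(name: str) -> str:
    """Return a personalized greeting for the given name."""
    return f"Hello, {name}!"

@gr.mcp.prompt()
def greet_user(name: str, style: str = "friendly") -> str:
    """Return a prompt template asking for a greeting in a particular style."""
    return f"Please write a {style} greeting addressed to {name}."

with gr.Blocks() as demo:
    gr.Markdown("This app exposes an MCP resource and an MCP prompt.")

demo.launch(mcp_server=True)
```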
MCP Resources and Prompts
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
So far, all of our MCP tools, resources, or prompts have corresponded to event listeners in the UI. This works well for functions that directly update the UI, but may not work if you wish to expose a "pure logic" function that should return raw data (e.g. a JSON object) without directly causing a UI update.

In order to expose such an MCP tool, you can create a pure Gradio API endpoint using `gr.api` (see [full docs here](https://www.gradio.app/main/docs/gradio/api)). Here's an example of creating an MCP tool that slices a list:

$code_mcp_tool_only

Note that if you use this approach, your function signature must be fully typed, including the return value, as these signatures are used to determine the typing information for the MCP tool.
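A minimal sketch of such an endpoint (our own illustration, not the $code_mcp_tool_only demo; note the fully typed signature):

```python
import gradio as gr

def slice_list(items: list[int], start: int, end: int) -> list[int]:
    """
    Return the slice items[start:end].

    Args:
        items: The list to slice.
        start: Start index (inclusive).
        end: End index (exclusive).
    """
    return items[start:end]

with gr.Blocks() as demo:
    # No UI components here: this function is exposed only as an API endpoint / MCP tool
    gr.api(slice_list)

demo.launch(mcp_server=True)
```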
Adding MCP-Only Functions
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
In some cases, you may decide not to use Gradio's built-in integration and instead manually create a FastMCP server that calls a Gradio app. This approach is useful when you want to:

- Store state / identify users between calls instead of treating every tool call completely independently
- Start the Gradio app's MCP server when a tool is called (if you are running multiple Gradio apps locally and want to save memory / GPU)

This is very doable thanks to the [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) and the [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk)'s `FastMCP` class. Here's an example of creating a custom MCP server that connects to various Gradio apps hosted on [HuggingFace Spaces](https://huggingface.co/spaces) using the `stdio` protocol:

```python
from mcp.server.fastmcp import FastMCP
from gradio_client import Client
import sys
import io
import json

mcp = FastMCP("gradio-spaces")

clients = {}

def get_client(space_id: str) -> Client:
    """Get or create a Gradio client for the specified space."""
    if space_id not in clients:
        clients[space_id] = Client(space_id)
    return clients[space_id]

@mcp.tool()
async def generate_image(prompt: str, space_id: str = "ysharma/SanaSprint") -> str:
    """Generate an image using Flux.

    Args:
        prompt: Text prompt describing the image to generate
        space_id: HuggingFace Space ID to use
    """
    client = get_client(space_id)
    result = client.predict(
        prompt=prompt,
        model_size="1.6B",
        seed=0,
        randomize_seed=True,
        width=1024,
        height=1024,
        guidance_scale=4.5,
        num_inference_steps=2,
        api_name="/infer"
    )
    return result

@mcp.tool()
async def run_dia_tts(prompt: str, space_id: str = "ysharma/Dia-1.6B") -> str:
    """Text-to-Speech Synthesis.

    Args:
        prompt: Text prompt describing the conversation between speakers S1, S2
        space_id: HuggingFace Space ID to use
    """
    client = get_client(space_id)
    result = client.predict(
        text_input=f"""{prompt}""",
        audio_prompt_input=None,
        max_new_tokens=3072,
        cfg_scale=3,
        temperature=1.3,
        top_p=0.95,
        cfg_filter_top_k=30,
        speed_factor=0.94,
        api_name="/generate_audio"
    )
    return result

if __name__ == "__main__":
    import sys
    import io
    sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')
    mcp.run(transport='stdio')
```
Gradio with FastMCP
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
This server exposes two tools:

1. `run_dia_tts` - Generates a conversation for the given transcript in the form of `[S1]first-sentence. [S2]second-sentence. [S1]...`
2. `generate_image` - Generates images using a fast text-to-image model

To use this MCP Server with Claude Desktop (as MCP Client):

1. Save the code to a file (e.g., `gradio_mcp_server.py`)
2. Install the required dependencies: `pip install mcp gradio-client`
3. Configure Claude Desktop to use your server by editing the configuration file at `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):

```json
{
  "mcpServers": {
    "gradio-spaces": {
      "command": "python",
      "args": [
        "/absolute/path/to/gradio_mcp_server.py"
      ]
    }
  }
}
```

4. Restart Claude Desktop

Now, when you ask Claude about generating an image or transcribing audio, it can use your Gradio-powered tools to accomplish these tasks.
Gradio with FastMCP
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
The MCP protocol is still in its infancy and you might see issues connecting to an MCP Server that you've built. We generally recommend using the [MCP Inspector Tool](https://github.com/modelcontextprotocol/inspector) to try connecting and debugging your MCP Server. Here are some things that may help:

**1. Ensure that you've provided type hints and valid docstrings for your functions**

As mentioned earlier, Gradio reads the docstrings for your functions and the type hints of input arguments to generate the description of the tool and parameters. A valid function and docstring looks like this (note the "Args:" block with indented parameter names underneath):

```py
def image_orientation(image: Image.Image) -> str:
    """
    Returns whether image is portrait or landscape.

    Args:
        image (Image.Image): The image to check.
    """
    return "Portrait" if image.height > image.width else "Landscape"
```

Note: You can preview the schema that is created for your MCP server by visiting the `http://your-server:port/gradio_api/mcp/schema` URL.

**2. Try accepting input arguments as `str`**

Some MCP Clients do not recognize parameters that are numeric or other complex types, but all of the MCP Clients that we've tested accept `str` input parameters. When in doubt, change your input parameter to be a `str` and then cast to a specific type in the function, as in this example:

```py
def prime_factors(n: str):
    """
    Compute the prime factorization of a positive integer.

    Args:
        n (str): The integer to factorize. Must be greater than 1.
    """
    n_int = int(n)
    if n_int <= 1:
        raise ValueError("Input must be an integer greater than 1.")

    factors = []
    while n_int % 2 == 0:
        factors.append(2)
        n_int //= 2

    divisor = 3
    while divisor * divisor <= n_int:
        while n_int % divisor == 0:
            factors.append(divisor)
            n_int //= divisor
        divisor += 2

    if n_int > 1:
        factors.append(n_int)

    return factors
```
Troubleshooting your MCP Servers
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
**3. Ensure that your MCP Client Supports Streamable HTTP**

Some MCP Clients do not yet support streamable HTTP-based MCP Servers. In those cases, you can use a tool such as [mcp-remote](https://github.com/geelen/mcp-remote). First install [Node.js](https://nodejs.org/en/download/). Then, add the following to your own MCP Client config:

```
{
  "mcpServers": {
    "gradio": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://your-server:port/gradio_api/mcp/"
      ]
    }
  }
}
```

**4. Restart your MCP Client and MCP Server**

Some MCP Clients require you to restart them every time you update the MCP configuration. Other times, if the connection between the MCP Client and servers breaks, you might need to restart the MCP server. If all else fails, try restarting both your MCP Client and MCP Servers!
Troubleshooting your MCP Servers
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
The Model Context Protocol (MCP) standardizes how applications provide context to LLMs. It allows Claude to interact with external tools, like image generators, file systems, or APIs, etc.
What is MCP?
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
- Python 3.10+
- An Anthropic API key
- Basic understanding of Python programming
Prerequisites
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
First, install the required packages:

```bash
pip install gradio anthropic mcp
```

Create a `.env` file in your project directory and add your Anthropic API key:

```
ANTHROPIC_API_KEY=your_api_key_here
```
Setup
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
The server provides tools that Claude can use. In this example, we'll create a server that generates images through [a HuggingFace space](https://huggingface.co/spaces/ysharma/SanaSprint).

Create a file named `gradio_mcp_server.py`:

```python
from mcp.server.fastmcp import FastMCP
import json
import sys
import io
import time
from gradio_client import Client

sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', errors='replace')
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8', errors='replace')

mcp = FastMCP("huggingface_spaces_image_display")

@mcp.tool()
async def generate_image(prompt: str, width: int = 512, height: int = 512) -> str:
    """Generate an image using SanaSprint model.

    Args:
        prompt: Text prompt describing the image to generate
        width: Image width (default: 512)
        height: Image height (default: 512)
    """
    client = Client("https://ysharma-sanasprint.hf.space/")

    try:
        result = client.predict(
            prompt,
            "0.6B",
            0,
            True,
            width,
            height,
            4.0,
            2,
            api_name="/infer"
        )

        if isinstance(result, list) and len(result) >= 1:
            image_data = result[0]
            if isinstance(image_data, dict) and "url" in image_data:
                return json.dumps({
                    "type": "image",
                    "url": image_data["url"],
                    "message": f"Generated image for prompt: {prompt}"
                })

        return json.dumps({
            "type": "error",
            "message": "Failed to generate image"
        })

    except Exception as e:
        return json.dumps({
            "type": "error",
            "message": f"Error generating image: {str(e)}"
        })

if __name__ == "__main__":
    mcp.run(transport='stdio')
```
Part 1: Building the MCP Server
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
"message": f"Error generating image: {str(e)}" }) if __name__ == "__main__": mcp.run(transport='stdio') ``` What this server does: 1. It creates an MCP server that exposes a `generate_image` tool 2. The tool connects to the SanaSprint model hosted on HuggingFace Spaces 3. It handles the asynchronous nature of image generation by polling for results 4. When an image is ready, it returns the URL in a structured JSON format
Part 1: Building the MCP Server
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
Now let's create a Gradio chat interface as an MCP Client that connects Claude to our MCP server. Create a file named `app.py`:

```python
import asyncio
import os
import json
from typing import List, Dict, Any, Union
from contextlib import AsyncExitStack

import gradio as gr
from gradio.components.chatbot import ChatMessage
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from anthropic import Anthropic
from dotenv import load_dotenv

load_dotenv()

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)

class MCPClientWrapper:
    def __init__(self):
        self.session = None
        self.exit_stack = None
        self.anthropic = Anthropic()
        self.tools = []

    def connect(self, server_path: str) -> str:
        return loop.run_until_complete(self._connect(server_path))

    async def _connect(self, server_path: str) -> str:
        if self.exit_stack:
            await self.exit_stack.aclose()

        self.exit_stack = AsyncExitStack()

        is_python = server_path.endswith('.py')
        command = "python" if is_python else "node"

        server_params = StdioServerParameters(
            command=command,
            args=[server_path],
            env={"PYTHONIOENCODING": "utf-8", "PYTHONUNBUFFERED": "1"}
        )

        stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
        self.stdio, self.write = stdio_transport

        self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))
        await self.session.initialize()

        response = await self.session.list_tools()
```
Part 2: Building the MCP Client with Gradio
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
iption, "input_schema": tool.inputSchema } for tool in response.tools] tool_names = [tool["name"] for tool in self.tools] return f"Connected to MCP server. Available tools: {', '.join(tool_names)}" def process_message(self, message: str, history: List[Union[Dict[str, Any], ChatMessage]]) -> tuple: if not self.session: return history + [ {"role": "user", "content": message}, {"role": "assistant", "content": "Please connect to an MCP server first."} ], gr.Textbox(value="") new_messages = loop.run_until_complete(self._process_query(message, history)) return history + [{"role": "user", "content": message}] + new_messages, gr.Textbox(value="") async def _process_query(self, message: str, history: List[Union[Dict[str, Any], ChatMessage]]): claude_messages = [] for msg in history: if isinstance(msg, ChatMessage): role, content = msg.role, msg.content else: role, content = msg.get("role"), msg.get("content") if role in ["user", "assistant", "system"]: claude_messages.append({"role": role, "content": content}) claude_messages.append({"role": "user", "content": message}) response = self.anthropic.messages.create( model="claude-3-5-sonnet-20241022", max_tokens=1000, messages=claude_messages, tools=self.tools ) result_messages = [] for content in response.content: if content.type == 'text': result_messages.append({ "role": "assistant", "content": content.text }) elif content.type == 'tool_use': tool_name = content.name tool_args = content.input
Part 2: Building the MCP Client with Gradio
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
ntent": content.text }) elif content.type == 'tool_use': tool_name = content.name tool_args = content.input result_messages.append({ "role": "assistant", "content": f"I'll use the {tool_name} tool to help answer your question.", "metadata": { "title": f"Using tool: {tool_name}", "log": f"Parameters: {json.dumps(tool_args, ensure_ascii=True)}", "status": "pending", "id": f"tool_call_{tool_name}" } }) result_messages.append({ "role": "assistant", "content": "```json\n" + json.dumps(tool_args, indent=2, ensure_ascii=True) + "\n```", "metadata": { "parent_id": f"tool_call_{tool_name}", "id": f"params_{tool_name}", "title": "Tool Parameters" } }) result = await self.session.call_tool(tool_name, tool_args) if result_messages and "metadata" in result_messages[-2]: result_messages[-2]["metadata"]["status"] = "done" result_messages.append({ "role": "assistant", "content": "Here are the results from the tool:", "metadata": { "title": f"Tool Result for {tool_name}", "status": "done", "id": f"result_{tool_name}" } }) result_content = result.content if isinstance(result_content, list): result_content = "\n".join(str(item) for item in re
Part 2: Building the MCP Client with Gradio
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
```python
                # (continuing the tool_use branch from the previous block)
                result_content = result.content
                if isinstance(result_content, list):
                    result_content = "\n".join(str(item) for item in result_content)

                try:
                    result_json = json.loads(result_content)
                    if isinstance(result_json, dict) and "type" in result_json:
                        if result_json["type"] == "image" and "url" in result_json:
                            result_messages.append({
                                "role": "assistant",
                                "content": {"path": result_json["url"], "alt_text": result_json.get("message", "Generated image")},
                                "metadata": {
                                    "parent_id": f"result_{tool_name}",
                                    "id": f"image_{tool_name}",
                                    "title": "Generated Image"
                                }
                            })
                        else:
                            result_messages.append({
                                "role": "assistant",
                                "content": "```\n" + result_content + "\n```",
                                "metadata": {
                                    "parent_id": f"result_{tool_name}",
                                    "id": f"raw_result_{tool_name}",
                                    "title": "Raw Output"
                                }
                            })
                except:
                    result_messages.append({
                        "role": "assistant",
                        "content": "```\n" + result_content + "\n```",
                        "metadata": {
                            "parent_id": f"result_{tool_name}",
                            "id": f"raw_result_{tool_name}",
                            "title": "Raw Output"
                        }
                    })
```
Part 2: Building the MCP Client with Gradio
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
"parent_id": f"result_{tool_name}", "id": f"raw_result_{tool_name}", "title": "Raw Output" } }) claude_messages.append({"role": "user", "content": f"Tool result for {tool_name}: {result_content}"}) next_response = self.anthropic.messages.create( model="claude-3-5-sonnet-20241022", max_tokens=1000, messages=claude_messages, ) if next_response.content and next_response.content[0].type == 'text': result_messages.append({ "role": "assistant", "content": next_response.content[0].text }) return result_messages client = MCPClientWrapper() def gradio_interface(): with gr.Blocks(title="MCP Weather Client") as demo: gr.Markdown("MCP Weather Assistant") gr.Markdown("Connect to your MCP weather server and chat with the assistant") with gr.Row(equal_height=True): with gr.Column(scale=4): server_path = gr.Textbox( label="Server Script Path", placeholder="Enter path to server script (e.g., weather.py)", value="gradio_mcp_server.py" ) with gr.Column(scale=1): connect_btn = gr.Button("Connect") status = gr.Textbox(label="Connection Status", interactive=False) chatbot = gr.Chatbot( value=[], height=500, show_copy_button=True, avatar_images=("👤", "🤖") ) with gr.Row(equal_height=True): msg = gr.Textbox( label="Your Question", placeholder="Ask about weather or alerts (e.g., What's the weather in New York?)",
Part 2: Building the MCP Client with Gradio
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
```python
        # (continuing gradio_interface from the previous block)
        with gr.Row(equal_height=True):
            msg = gr.Textbox(
                label="Your Question",
                placeholder="Ask about weather or alerts (e.g., What's the weather in New York?)",
                scale=4
            )
            clear_btn = gr.Button("Clear Chat", scale=1)

        connect_btn.click(client.connect, inputs=server_path, outputs=status)
        msg.submit(client.process_message, [msg, chatbot], [chatbot, msg])
        clear_btn.click(lambda: [], None, chatbot)

    return demo

if __name__ == "__main__":
    if not os.getenv("ANTHROPIC_API_KEY"):
        print("Warning: ANTHROPIC_API_KEY not found in environment. Please set it in your .env file.")

    interface = gradio_interface()
    interface.launch(debug=True)
```

What this MCP Client does:

- Creates a friendly Gradio chat interface for user interaction
- Connects to the MCP server you specify
- Handles conversation history and message formatting
- Makes calls to the Claude API with tool definitions
- Processes tool usage requests from Claude
- Displays images and other tool outputs in the chat
- Sends tool results back to Claude for interpretation
Part 2: Building the MCP Client with Gradio
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
To run your MCP application:

- Start a terminal window and run the MCP Client:

```bash
python app.py
```

- Open the Gradio interface at the URL shown (typically http://127.0.0.1:7860)
- In the Gradio interface, you'll see a field for the MCP Server path. It should default to `gradio_mcp_server.py`.
- Click "Connect" to establish the connection to the MCP server.
- You should see a message indicating the server connection was successful.
Running the Application
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
Now you can chat with Claude and it will be able to generate images based on your descriptions.

Try prompts like:

- "Can you generate an image of a mountain landscape at sunset?"
- "Create an image of a cool tabby cat"
- "Generate a picture of a panda wearing sunglasses"

Claude will recognize these as image generation requests and automatically use the `generate_image` tool from your MCP server.
Example Usage
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
Here's the high-level flow of what happens during a chat session:

1. Your prompt enters the Gradio interface
2. The client forwards your prompt to Claude
3. Claude analyzes the prompt and decides to use the `generate_image` tool
4. The client sends the tool call to the MCP server
5. The server calls the external image generation API
6. The image URL is returned to the client
7. The client sends the image URL back to Claude
8. Claude provides a response that references the generated image
9. The Gradio chat interface displays both Claude's response and the image
How it Works
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
Now that you have a working MCP system, here are some ideas to extend it:

- Add more tools to your server
- Improve error handling
- Add private Huggingface Spaces with authentication for secure tool access
- Create custom tools that connect to your own APIs or services
- Implement streaming responses for better user experience
Next Steps
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
Congratulations! You've successfully built an MCP Client and Server that allows Claude to generate images based on text prompts. This is just the beginning of what you can do with Gradio and MCP. This guide enables you to build complex AI applications that can use Claude or any other powerful LLM to interact with virtually any external tool or service. Read our other Guide on using [Gradio apps as MCP Servers](./building-mcp-server-with-gradio).
Conclusion
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
As of version 5.36.0, Gradio now comes with a built-in MCP server that can upload files to a running Gradio application. In the `View API` page of the server, you should see the following code snippet if any of the tools require file inputs:

<img src="https://huggingface.co/datasets/freddyaboulton/bucket/resolve/main/MCPConnectionDocs.png">

The command to start the MCP server takes two arguments:

- The URL (or Hugging Face space id) of the gradio application to upload the files to. In this case, `http://127.0.0.1:7860`.
- The local directory on your computer from which the server is allowed to upload files (`<UPLOAD_DIRECTORY>`). For security, please make this directory as narrow as possible to prevent unintended file uploads.

As stated in the image, you need to install [uv](https://docs.astral.sh/uv/getting-started/installation/) (a python package manager that can run python scripts) before connecting from your MCP client.

If you have gradio installed locally and you don't want to install uv, you can replace the `uvx` command with the path to the gradio binary. It should look like this:

```json
"upload-files": {
  "command": "<absolute-path-to-gradio>",
  "args": [
    "upload-mcp",
    "http://localhost:7860/",
    "/Users/freddyboulton/Pictures"
  ]
}
```

After connecting to the upload server, your LLM agent will know when to upload files for you automatically!

<img src="https://huggingface.co/datasets/freddyaboulton/bucket/resolve/main/Ghibliafy.png">
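For reference, the uv-based config follows the same shape; the exact snippet (with the correct arguments for your app) is shown on your app's View API page, so treat the following as a hypothetical sketch only, with the `uvx` arguments being our assumption:

```json
"upload-files": {
  "command": "uvx",
  "args": [
    "gradio",
    "upload-mcp",
    "http://127.0.0.1:7860/",
    "<UPLOAD_DIRECTORY>"
  ]
}
```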
Using the File Upload MCP Server
https://gradio.app/guides/file-upload-mcp
Mcp - File Upload Mcp Guide
In this guide, we've covered how you can connect to the Upload File MCP Server so that your agent can upload files before using Gradio MCP servers. Remember to set the `<UPLOAD_DIRECTORY>` as small as possible to prevent unintended file uploads!
Conclusion
https://gradio.app/guides/file-upload-mcp
Mcp - File Upload Mcp Guide