<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Henshu's Revision Showcase]]></title><description><![CDATA[Explore original drafts alongside the extraordinary pieces Henshu transformed them into.]]></description><link>https://blog.henshu.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!oLaT!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9893cd04-aeee-4be5-9e64-88adb2296349_500x500.png</url><title>Henshu&apos;s Revision Showcase</title><link>https://blog.henshu.ai</link></image><generator>Substack</generator><lastBuildDate>Sun, 12 Apr 2026 11:17:27 GMT</lastBuildDate><atom:link href="https://blog.henshu.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Henshu.ai]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[henshu@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[henshu@substack.com]]></itunes:email><itunes:name><![CDATA[maytee]]></itunes:name></itunes:owner><itunes:author><![CDATA[maytee]]></itunes:author><googleplay:owner><![CDATA[henshu@substack.com]]></googleplay:owner><googleplay:email><![CDATA[henshu@substack.com]]></googleplay:email><googleplay:author><![CDATA[maytee]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Tax Return Response]]></title><description><![CDATA[After Henshu]]></description><link>https://blog.henshu.ai/p/tax-return-response</link><guid isPermaLink="false">https://blog.henshu.ai/p/tax-return-response</guid><dc:creator><![CDATA[maytee]]></dc:creator><pubDate>Fri, 11 Aug 2023 14:55:12 GMT</pubDate><enclosure
url="https://substack-post-media.s3.amazonaws.com/public/images/d2c72ecb-c582-40e1-bde0-d7eb6059056a_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>After Henshu</h1><p>I hope this message finds you well. Thank you for reaching out to us. We are awaiting your confirmation regarding your company's tax return.</p><p>In reference to Attachment 1 - our previous communication, we initially thought we would assist you in paying your company's tax return. However, you managed to accomplish this task in a timely manner. We apologize for any misunderstandings and delay on our part. Upon receiving your approval, we will provide instructions on how to complete the payment slips&#8212;one for the national tax office and another for the local tax office. Like the semi-annual withholding tax payment slip, you can submit the form and make your payment at the tax office, bank, or a post office.</p><p>Attachment 2 is a notification from the pension office concerning Annual Social Insurance reporting for all companies. As part of your payroll calculation team, we have already taken care of this for you. Therefore, you may keep the document on file without taking any further actions.</p><p>We look forward to your response.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.henshu.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Henshu's Revision Showcase! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h1>Before Henshu</h1><p>I hope all is well&#65281; Thank you very much for your message. We will look forward to your confirmation regarding your company's Tax Return. Regarding attachment 1, the mail from us.<br>At first I thought that we would be helping you to pay your company's tax return, but you made it on time (apologize once again for my delay). Therefore we will send you instruction regarding how to fill the payment slips (one for national tax office, the other for local tax office) once we receive your consent. Just like the payment slip for the semi-annual withholding tax, you may submit the form and make your payment at the tax office, bank, or post office. The second attachment is notification from the pension office regarding Annual Social Insurnace reporting for every companies.<br>We have handled it as part of your payroll calculation team, so you can leave the document as it is.</p>]]></content:encoded></item><item><title><![CDATA[Structured Response from LLMs]]></title><description><![CDATA[After Henshu Large Language models (LLMs) have revolutionized the way we interact with unstructured text data. 
They can search for specific information, summarize key points, and even answer straightforward yes-or-no questions with corresponding explanations.]]></description><link>https://blog.henshu.ai/p/structure-response-from-llms</link><guid isPermaLink="false">https://blog.henshu.ai/p/structure-response-from-llms</guid><dc:creator><![CDATA[maytee]]></dc:creator><pubDate>Wed, 02 Aug 2023 14:01:30 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c160b3b3-35cd-482e-afa8-65d71bf222f0_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<h1>After Henshu</h1><p>Large Language models (LLMs) have revolutionized the way we interact with unstructured text data. They can search for specific information, summarize key points, and even answer straightforward yes-or-no questions with corresponding explanations.</p><p>However, for us developers, the outputs generated by LLMs can be cumbersome to handle. These models can certainly generate a paragraph based on our requirements, but the data is unstructured, posing challenges for those of us who prefer structured data. Instead of presenting users with the raw output from the LLM, we desire the flexibility of structured data.</p><h2>Making LLMs Generate Structured Data</h2><p><a href="https://platform.openai.com/docs/guides/gpt/function-calling">Function calling</a> is a novel feature offered on <code>gpt-3.5-turbo-0613</code> and <code>gpt-4-0613</code> by OpenAI. It enables the LLM to call a predefined function, allowing it to request API calls and consume the returned response. For instance, the following function can be provided to the Chat Completions API:</p><pre><code><code>get_current_weather(location: string, unit: 'celsius' | 'fahrenheit')</code></code></pre><p>Upon prompting the LLM with a query like "What is the weather like in London," GPT responds with a request to call the <code>get_current_weather()</code> function with the location set to "London". The output can then be processed to generate a response such as "It's 30 degrees Celsius in London." Impressive, isn't it?</p><p>Let's delve even deeper!</p><p>What if, instead of retrieving data for the LLM through function calls, we equip it with a function to log our desired actions? Let's assume we specify some parameters for the log entry output and instruct the LLM to log this information. You might be surprised to find that GPT will happily comply!</p><h2>The Code in Action</h2><p>Let's walk through a sentiment analysis example.
Our aim is for GPT to identify the sentiment in an article, assign a sentiment score, and provide instances that reinforce the identified sentiment.</p><p>We can shape our structured response using the options available in the function's parameters. To log the sentiment analysis, we can detail the parameters as a function, as outlined below:</p><pre><code><code>structuredResponseFn = {
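    # A note on shape: "parameters" below follows JSON Schema, the format the
    # Chat Completions API expects for describing a function's arguments. When
    # the model chooses to call this function, it replies with a JSON string of
    # arguments matching this schema, which we can parse into a dict.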
    "name": "logger",
    "description": "The logger function logs takes a given text and provides the sentiment, a sentiment score, and provides supporting evidence from the text.",
    "parameters": {
        "type": "object",
        "properties": {
            "sentiment": {
                "type": "string",
                "description": "The overarching sentiment of the article.",
            },
            "sentimentScore": {
                "type": "number",
                "description": "A number between 0-100 describing the strength of the sentiment.",
            },
            "supportingEvidence": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "example": {
                            "type": "string",
                            "description": "An example of the sentiment in the text.",
                        },
                        "score": {
                            "type": "number",
                            "description": "A number between 0-100 describing the strength of the sentiment example.",
                        },
                    },
                    "required": ["example", "score"],
                },
                "description": "A sorted list by score of supporting evidence for the sentiment.",
            },
        },
        "required": [
            "sentiment",
            "sentimentScore",
            "supportingEvidence",
        ],
    },
}</code></code></pre><p>Next, we'll use the following prompt for GPT, allowing it to utilize the function and generate a response based on sentiment analysis. Remember, GPT might not fill in all the details, so be sure to prompt it for all the return values you'd like it to respond with.</p><pre><code><code>structuredResponseContent = f"""{article}

Log the sentiment of the article and provide the top 3 pieces of supporting evidence.
"""</code></code></pre><p>Here's the Python snippet that brings the two code pieces together:</p><pre><code><code>def run_structured_response(structuredResponseContent, structuredResponseFn):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": content}],
        functions=[structuredResponseFn],
        temperature=0.1,
        function_call="auto", 
    )
    response_message = response["choices"][0]["message"]

    if response_message.get("function_call"):
        function_args = json.loads(response_message["function_call"]["arguments"])
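        # function_args is now a plain dict, so fields read like any dict,
        # e.g. function_args["sentiment"] or
        # function_args["supportingEvidence"][0]["score"]. Note that the
        # "arguments" string is model-generated JSON, so robust code should
        # guard json.loads and prefer .get() for keys the model may omit.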
        print(function_args) # print out the structured response</code></code></pre><h2>Experimenting</h2><p>Let's put this function to the test using a sample customer complaint letter. As the letter is a complaint, we can anticipate a negative sentiment. Let's explore how well GPT performs.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!p0CC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!p0CC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png 424w, https://substackcdn.com/image/fetch/$s_!p0CC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png 848w, https://substackcdn.com/image/fetch/$s_!p0CC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png 1272w, https://substackcdn.com/image/fetch/$s_!p0CC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!p0CC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png" width="490" height="630" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/94d39a4c-66f2-4504-9392-88521e962397_490x630.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:630,&quot;width&quot;:490,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:139087,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!p0CC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png 424w, https://substackcdn.com/image/fetch/$s_!p0CC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png 848w, https://substackcdn.com/image/fetch/$s_!p0CC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png 1272w, https://substackcdn.com/image/fetch/$s_!p0CC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://consumer.ftc.gov/articles/sample-customer-complaint-letter#sample">FTC&#8217;s Sample Customer Complaint Letter </a></figcaption></figure></div><p>Here is the output after invoking `run_structured_response`. We received a JSON object that reflects the sentiment type, a sentiment score, and the top 3 instances from the text that back up the sentiment.</p><pre><code><code>{
   "sentiment":"negative",
   "sentimentScore":80,
   "supportingEvidence":[
      {
         "example":"The sofa is defective.",
         "score":90
      },
      {
         "example":"One of the legs broke off on March 31, 2021.",
         "score":85
      },
      {
         "example":"The store manager, Aaron, would not speak to me.",
         "score":75
      }
   ]
}</code></code></pre><h1>Before Henshu</h1><p>LLMs have been a godsend for working with long text. You can ask it to find certain information. You can ask it to list out the key points. You can even ask it simple yes no questions with a follow up explanation. </p><p>However for programmers, LLM outputs are hard to work with. While we can get the LLMs to write out a paragraph about what we want, its unstructured. And programmers don&#8217;t like working with unstructured data. We don&#8217;t want to directly dump out the the LLM&#8217;s output to users. What we want is structured data. </p><h2>Lets make LLMs return structured data</h2><p>LLMs with <a href="https://platform.openai.com/docs/guides/gpt/function-calling">function calling</a> can be prompted to do an action and return structured data.</p><p>Function calling is a new feature available on <code>gpt-3.5-turbo-0613</code> and <code>gpt-4-0613</code> on OpenAI that allows the LLM to call a pre-described function.
For example, we can give the following function to the Chat Completions API.</p><pre><code><code>get_current_weather(location: string, unit: 'celsius' | 'fahrenheit')</code></code></pre><p>Then prompting the LLM with &#8220;What is the weather like in London,&#8221; GPT will return with a response to call the <code>get_current_weather()</code> function with the location set to &#8220;london.&#8221; Read that output and generate a response that say something like &#8220;It&#8217;s 30 degrees celsius in London.&#8221; Pretty neat!</p><div class="pullquote"><p>Let&#8217;s go one level meta!</p></div><p>What if instead of having it call functions to get data back for the LLM, we gave it a function to log out what it wants to do? Lets say we define what the log entry output should have some given params, and prompt the LLM to log that? Surprisingly it will happily do that! </p><h2>Show me the code</h2><p>Let&#8217;s work though an sentiment analysis example. We want GPT to output the sentiment from an article, give a score for the sentiment, and give some examples that bolster the given sentiment. </p><p>We can define our structured response using the function&#8217;s parameter options. To log the sentiment analysis, we can describe the parameters as a function described below:</p><pre><code><code>structuredResponseFn = {
    "name": "logger",
    "description": "The logger function logs takes a given text and provides the sentiment, a sentiment score, and provides supporting evidence from the text.",
    "parameters": {
        "type": "object",
        "properties": {
            "sentiment": {
                "type": "string",
                "description": "The overarching sentiment of the article.",
            },
            "sentimentScore": {
                "type": "number",
                "description": "A number between 0-100 describing the strength of the sentiment.",
            },
            "supportingEvidence": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "example": {
                            "type": "string",
                            "description": "An example of the sentiment in the text.",
                        },
                        "score": {
                            "type": "number",
                            "description": "A number between 0-100 describing the strength of the sentiment example.",
                        },
                    },
                    "required": ["example", "score"],
                },
                "description": "A sorted list by score of supporting evidence for the sentiment.",
            },
        },
        "required": [
            "sentiment",
            "sentimentScore",
            "supportingEvidence",
        ],
    },
}</code></code></pre><p>Next, we will use this prompt to GPT to use the function and generate a response related to the sentiment analysis. GPT will not necessary try to fill in all the details so be sure to prompt it for all the return values you want it to respond with. </p><pre><code><code>structuredResponseContent = f"""{article}

Log the sentiment of the article and provide the top 3 supporting evidences.
"""</code></code></pre><p>Here is the python snippet to put the two code together. </p><pre><code><code>def run_structured_response(structuredResponseContent, structuredResponseFn):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": content}],
        functions=[structuredResponseFn],
        temperature=0.1,
        function_call="auto", 
    )
    response_message = response["choices"][0]["message"]

    if response_message.get("function_call"):
        function_args = json.loads(response_message["function_call"]["arguments"])
        print(function_args) # print out the structured response</code></code></pre><h2>Testing it out</h2><p>Let test this function out on a sample customer complaint letter. Given this is a complaint, we can expect a negative sentiment. Let&#8217;s see how GPT performs.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!p0CC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!p0CC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png 424w, https://substackcdn.com/image/fetch/$s_!p0CC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png 848w, https://substackcdn.com/image/fetch/$s_!p0CC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png 1272w, https://substackcdn.com/image/fetch/$s_!p0CC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!p0CC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png" width="490" height="630" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/94d39a4c-66f2-4504-9392-88521e962397_490x630.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:630,&quot;width&quot;:490,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:139087,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!p0CC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png 424w, https://substackcdn.com/image/fetch/$s_!p0CC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png 848w, https://substackcdn.com/image/fetch/$s_!p0CC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png 1272w, https://substackcdn.com/image/fetch/$s_!p0CC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94d39a4c-66f2-4504-9392-88521e962397_490x630.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://consumer.ftc.gov/articles/sample-customer-complaint-letter#sample">FTC&#8217;s Sample Customer Complaint Letter </a></figcaption></figure></div><p>This is the result from calling <code>run_structured_response</code>. We got a JSON that captures the sentiment type, a sentiment score, as well as the top 3 examples from the text that supports the claim. </p><pre><code><code>{
   "sentiment":"negative",
   "sentimentScore":80,
   "supportingEvidence":[
      {
         "example":"The sofa is defective.",
         "score":90
      },
      {
         "example":"One of the legs broke off on March 31, 2021.",
         "score":85
      },
      {
         "example":"The store manager, Aaron, would not speak to me.",
         "score":75
      }
   ]
}
</code></code></pre>]]></content:encoded></item></channel></rss>