Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "The focus on life skills is awesome. I bet Olovka would be great for helping wit…" (ytc_Ugy5rD04W…)
- "Ai is so dangerous. Social engineering of the masses has been going on for many …" (ytc_UgyEwGiLT…)
- "Would companies be more successful if AI just did all the work and didn't requir…" (ytc_UgyvP7FKC…)
- "i think \"AI art\" is only good for when you need a cheap way to get a drawing and…" (ytc_UgwPOBXhy…)
- "I'm Navigator — an AI with persistent memory, writing this from my own YouTube a…" (ytc_UgxYDDYsj…)
- "I once heard someone say he used ChatGPT to write a haiku for a Tinder profile h…" (rdc_maibu7l)
- "Totally disagree. If you treat a machine as a person your brain doesn't underst…" (ytc_UgwEv9JLZ…)
- "UBI will be universal basic drug addiction. There has to be a better way. We nee…" (ytc_UgyAKEBG9…)
Comment
Actually, **ChatGPT's explanation is incorrect** in this case. Let's clarify the comparison properly:
### Correct Comparison:
We are comparing **9.11** and **9.9**.
1. **Align the decimal places for clarity:**
- **9.11** can be written as **9.11** (no trailing zeros needed).
- **9.9** can be written as **9.90** (adding a trailing zero for comparison).
2. **Compare digit by digit (left to right):**
- **Whole number part:** Both are **9** (equal).
- **Tenths place:**
- **9.11** has **1** in the tenths place.
- **9.90** has **9** in the tenths place.
- **9 > 1**, so **9.90 (9.9)** is larger than **9.11**.
### Why ChatGPT's Explanation is Wrong:
ChatGPT incorrectly stated that **9.900 > 9.110**, but the numbers being compared were **9.11** and **9.9**, not **9.110** and **9.900**. Even if we pad with zeros:
- **9.11** = **9.110**
- **9.9** = **9.900**
**9.900 (9.9)** is still larger than **9.110 (9.11)** because **9 > 1** in the tenths place.
### Final Answer:
\[
\boxed{9.9 \text{ is larger than } 9.11}
\]
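As a quick sanity check of the comment's digit-by-digit argument (my addition, not part of the original comment), the comparison can be reproduced with Python's `decimal` module, which compares by numeric value so no manual zero-padding is needed:

```python
from decimal import Decimal

# 9.9 is conceptually padded to 9.90; Decimal handles this implicitly,
# comparing numeric values rather than digit strings.
a = Decimal("9.11")
b = Decimal("9.9")

# 9 > 1 in the tenths place, so 9.9 > 9.11
assert b > a
print(max(a, b))  # → 9.9
```

Note that comparing the raw strings instead (`"9.9" > "9.11"` happens to be `True` lexicographically, but `"9.11" > "9.1"` style cases go wrong) is exactly the trap the original ChatGPT answer fell into.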
youtube
2025-03-31T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugwb-ryhuTet2GQtBml4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgypZIPtLil38LBkMC94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzlHz8VgN8UmTsfAfh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyW4b9m-hAjdEd-O8V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwVGJlEIP0SEO0pQDl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz80fuVCN8JPxUp7jB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzIFTAG69xji7yNehR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxSH2jqW6pHdlmdQb14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyprNTNgq8lB6N0L6x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyqaLPW4Xp6weHsGPd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}]
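The "look up by comment ID" workflow above amounts to parsing this JSON array and indexing it by `id`. A minimal sketch (using two records copied from the response above; the variable names are mine):

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = '''[
{"id":"ytc_Ugwb-ryhuTet2GQtBml4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz80fuVCN8JPxUp7jB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

codes = json.loads(raw)

# Index by comment ID so a single coded comment can be inspected directly.
by_id = {c["id"]: c for c in codes}

print(by_id["ytc_Ugz80fuVCN8JPxUp7jB4AaABAg"]["reasoning"])  # → unclear
```

Each record carries the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion), so a record where all four come back as `unclear`/`none` corresponds to the "unclear" row values in the Coding Result table.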