Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up via its comment ID.
Comment

> This argument is a waste of time. You would understand why if you put even a fraction of a second into learning how AI learns. Tell me: if you were a newborn baby given paper and a pencil, could you make good art? No. You learn how to by practicing, looking at others' art, etc. AI is doing the same thing. It is not the AI's fault. The reason AI art is hated is due to certain bad actors abusing the product of talented mathematicians and developers.

youtube · Viral AI Reaction · 2024-06-05T21:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzsyMLIGugMkcMh_YV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxqSxNQFdwwcunu6_94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyyiwM-dVkeAWT4sMR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyZBX27p8QTPerilLZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzJx0Sd2pXR2urreDV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwzJdItpJ3yUSO6n4l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzgUvJWclTc6ctYRY94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyS8kZTGQveRQ9cjRl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzPKHWEbR0mzYq8exp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwNcoyrRhP1mRjiZqN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
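The lookup-by-comment-ID step can be sketched in a few lines: parse the raw response as a JSON array and index its rows by `id`. A minimal sketch, assuming the model output is valid JSON; the `raw` string below holds a single row excerpted from the response above, standing in for the full output.

```python
import json

# Raw model output: a JSON array of coding objects, one per comment.
# A single row from the response above is used here for illustration.
raw = """[
  {"id": "ytc_UgyyiwM-dVkeAWT4sMR4AaABAg",
   "responsibility": "user", "reasoning": "virtue",
   "policy": "none", "emotion": "indifference"}
]"""

# Index rows by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

row = codings["ytc_UgyyiwM-dVkeAWT4sMR4AaABAg"]
print(row["responsibility"], row["emotion"])  # user indifference
```

The dict index makes it cheap to cross-reference a displayed comment against its coded dimensions, which is exactly what the table above shows for one comment.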