Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Human intelligence was formed over generations that occurred over 20 years per g…" (ytc_UgwKNQy1R…)
- "Is this the same AI that was told to freeze production, but deleted the database…" (rdc_n4f1ybu)
- "Okay first thing about the serpent Python is a program language that is commonly…" (ytc_Ugz1SHI4M…)
- "If the government leaves AI unchecked, it will cause a collapse: only the very p…" (ytc_UgwAMOLda…)
- "Starting 10:44 - > If the edge case is not in the simulation you have not real…" (ytc_UgxFT5-AA…)
- "Thinking you can make AI safe is stupid foolish diabolical naive delusional and …" (ytc_UgwqaRhmb…)
- "Could we pose the problem of the danger of AI to the AI in cloaked way?…" (ytc_Ugxq28CN8…)
- "-She is absolutely right about technology forcing us into "open air prisons". -S…" (ytc_UgyXJqiB9…)
Comment
AI art would be fine if they didn't rip off all the real artists who art they used to train models. If real artists were consented and commissioned fairly it would be fine, to give the artist the choice, a real artist could be hired to train the AI, for example and for the AI to not become a complete regurgitation of itself like the algorithm now would continuously need new 'real' art to gather from.
Source: youtube · "Viral AI Reaction" · 2025-05-15T20:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzsJC-CaHymk5I6IvF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyWs5JJnJc9gPzPImB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgygcHu5fhNQmyvrW_54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyeWDSLQcYCKqbpwll4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzVsuQ6zmrj5OOjuft4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwIjM-fqOFo2Ug7vkB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxTEEEg18gYFxb3ja94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxQ94oGYGSHf2x1wZ94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"disapproval"},
{"id":"ytc_UgyhRrK-WykR8jIbM_F4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugw_GI6z4bhiZGq9gJd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"disapproval"}
]
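The raw response above is a JSON array with one record per comment, keyed by comment ID, with the four coding dimensions (responsibility, reasoning, policy, emotion) alongside it. A minimal sketch of how such a response could be parsed and indexed — the helper name and validation logic are assumptions for illustration, not part of the tool:

```python
import json

# The five keys every record in the sample response carries.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (JSON array) into {comment_id: codes}.

    Raises ValueError if any record is missing a required key, so malformed
    model output fails loudly instead of being silently ingested.
    """
    coded = {}
    for rec in json.loads(raw):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")
        # Store the four dimensions under the comment ID.
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS - {"id"}}
    return coded


# One record from the response shown above, matching the Coding Result table.
raw = (
    '[{"id":"ytc_UgyhRrK-WykR8jIbM_F4AaABAg",'
    '"responsibility":"developer","reasoning":"contractualist",'
    '"policy":"liability","emotion":"mixed"}]'
)
coded = parse_coding_response(raw)
```

Looking up `coded["ytc_UgyhRrK-WykR8jIbM_F4AaABAg"]` then reproduces the Coding Result table for that comment (developer / contractualist / liability / mixed).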