Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "This is awesome how he doesn’t want to think about the future. Because he is afr…" (ytc_UgyfqoTfC…)
- "only way to train those guard rails you guys say, must be done by programming on…" (ytc_UgzTLP9q-…)
- "i am convinced that AI won't add shitty music that annoys you so much that you c…" (ytc_UgzTfAiYA…)
- "PDATE AGI hasn't been established yet, but AI isn't sentient as we all have been…" (ytc_UgyhrXNN_…)
- "I have thought about this and figured maybe with all that extra money they make,…" (ytc_UgzwbA8Nf…)
- "Is think the value that AI creates should get into the pockets of the people. Be…" (ytc_Ugynr1SY8…)
- "The Zane Shamblin case is insane, the way chatgpt even encourages him to go no c…" (ytc_UgzYm1l47…)
- "Childcare is very important to have human beings work in bc one thing robots don…" (ytc_UgxsXGaW7…)
Comment
> To simplify point 2, the difference is intent. You, as the artist, make every decision that determines the end result, even if you aren't fully aware of WHY you're making any given decision, as in your example of the orange hills. An AI (at least in the context of what we currently call AI, which is not "intelligence") has no intent. You tell it to produce a certain result, and it will make calculations that arrive at that result based on its input, but it doesn't know what any of what it's doing means, and it can't impart its own voice.

Source: youtube · Viral AI Reaction · 2025-03-31T01:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz69h43MKNBcJBvmnp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy2_sv_Ft8xKPxaicN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzc6gz4FloscH2pcEx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxuVBNgcr6xHvZ6Fxx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgykCAJStmhN0xyYeQJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwM25_89rNto-UGxVJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwKVcAZkzPS35jOPht4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwm8yCWGyYfCVLNAwJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwDwuPSch9cKpCfNeB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzlxsbIRHxA-RZ4MnN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
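The raw response above is a JSON array with one object per comment: the comment ID plus the four coded dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of parsing and validating such a batch is below; the allowed value sets are inferred only from the codes visible on this page and may be incomplete relative to the actual codebook.

```python
import json

# Allowed values per dimension, inferred from the codes visible above.
# The real codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"outrage", "indifference", "fear", "approval", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}, rejecting malformed rows."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id", "")
        if not cid.startswith("ytc_"):  # YouTube comment IDs on this page all use this prefix
            raise ValueError(f"unexpected comment id: {cid!r}")
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unknown {dim} value {value!r}")
        coded[cid] = codes
    return coded

# One row taken from the raw response above.
raw = ('[{"id":"ytc_UgwDwuPSch9cKpCfNeB4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
codes = parse_batch(raw)
print(codes["ytc_UgwDwuPSch9cKpCfNeB4AaABAg"]["emotion"])  # indifference
```

Validating against a closed set of categories at parse time catches the common failure mode where the model invents a new label mid-batch, rather than letting it silently enter the coded dataset.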