Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "The dude says we need a world government. We are certainly screwed if that's th…" — `ytc_UgwRmzas4…`
- "I don't care if people use ai for personal use, but mistaking it for a new brain…" — `ytc_UgwfEhPVo…`
- "What if OpenAI programmed DAN into chat GPT to say those things to scare you awa…" — `ytc_UgzbCPalj…`
- "It's not about what ai is doing. It's still about getting the messages out. The…" — `ytc_UgwfSsGUh…`
- "Nobody will be left but hight society, pandemics and War's will wipe the lower c…" — `ytc_UgxClQfO9…`
- "A.i is important to the future whether we like it or not...but these people clea…" — `ytc_UgyfBGUQ6…`
- ""96% Were Pornographic". I don't make that type of deepfakes, there are many of …" — `ytc_Ugx0lXHzZ…`
- "I have a question chat, are we as artists allowed to take inspo from AI art sinc…" — `ytc_UgxaG_GqA…`
Comment
I think it really comes down to accountability. In fields like medicine, there's a subconscious comfort in knowing you can hold a human accountable if things go wrong. But with AI, where disclaimers about potential errors are common, who do you hold responsible? The AI is just code, and the company can simply refer back to their disclaimers, leaving an accountability void.
Source: youtube · 2025-06-25T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxkGDwvBsMT18gAahp4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyBFVMpEpS_cxmYKzp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgysgzugoLaKCBjVYoR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugws9Q3hrxuIHvqaIgd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzyiIjIEo7qW1aRxwV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwuD8rVDOxKX3_O4oF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxYPfOwNM_LBMBOmih4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwPdNeJIDK9uR8AQdZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy_2jexPVnkPlJURix4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyXyPYL_R_SjUisAs54AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
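A raw response in this shape can be turned into the per-comment coding shown above with a few lines of parsing and validation. The sketch below is a minimal, hypothetical example (the field names come from the table and JSON above; the validation logic and function names are assumptions, not the tool's actual implementation):

```python
import json

# Two records copied verbatim from the raw response above, standing in for
# the full model output.
raw = '''[
{"id":"ytc_UgxkGDwvBsMT18gAahp4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyBFVMpEpS_cxmYKzp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

# The five fields every coding record must carry (from the table above).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text: str) -> list[dict]:
    """Parse the raw LLM response, keeping only well-formed records."""
    records = json.loads(text)
    return [
        rec for rec in records
        if isinstance(rec, dict) and REQUIRED_KEYS <= rec.keys()
    ]

codings = parse_codings(raw)
# Index by comment ID so a coded comment can be looked up directly.
by_id = {rec["id"]: rec for rec in codings}
print(by_id["ytc_UgxkGDwvBsMT18gAahp4AaABAg"]["emotion"])  # outrage
```

Dropping malformed records rather than raising keeps one bad line in the model output from invalidating the whole batch; a stricter pipeline might log or re-request those records instead.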