Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "bernie your wrong about this the biggest problem is them essentialy taking all t…" (ytc_UgxEM-ffB…)
- "People worry to much about there jobs and in general. But jobs will always be ne…" (ytc_Ugy-NBmEN…)
- "Fully self driving cars make sense, just not any of the ways we've tried to do i…" (ytc_Ugxd_KMn0…)
- "Some have claimed that some AI models started “talking to each other in Sanskrit…" (ytc_UgwX5JhNm…)
- "There are many things that humans can do that AI will never be able to do. It ca…" (ytc_UgwCuVl4o…)
- "Isaac Isamov's \"Three Laws of Robotics\" will most definitely not be observed wit…" (ytc_Ugyt5S_We…)
- ">Wagner mercenaries are dotted across Africa, / Ahh, the notoriously famous a…" (rdc_jrzfwl0)
- "Although I really like Anthropic and I respect Dario a lot, I think he is very b…" (ytc_UgzudoLVN…)
Comment: The "JUSTA" fallacy. People "assume that ChatGPT is a conscious being with self-awareness... but it's JUST A software program." What is 'self-awareness'? Is it that humans actually are self-aware, but AI models are just pretending? Why is a software program inherently less than a human mind?

Platform: youtube | Topic: AI Governance | Posted: 2025-09-02T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzHab0ivQQ69rzEJTl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxxrH69G3YZFCQ6sfR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzzA02dEHGpibt_TaF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx7quFIhl2J1y75YZl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyTpLS4FhfoPBBbjRN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxJAw7saiENN80ib294AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxMC8tzPbSad0rw1m54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyqWQG2Y-oGnrCSd2h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyGleXYFI0ZbLC7EV94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyx-O8QSE8IXMFEYl14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
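A raw response like the one above can be parsed and indexed by comment ID for lookup. The sketch below is a minimal example of that step; the controlled vocabulary per dimension is inferred from the values visible in this response and is an assumption, not a documented schema.

```python
import json

# Allowed values per coding dimension, inferred from the response above
# (assumed vocabulary -- the real codebook may differ).
ALLOWED = {
    "responsibility": {"company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError if a record carries a value outside the assumed
    vocabulary, so malformed model output is caught before storage.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
        coded[rec["id"]] = rec
    return coded
```

With the records indexed this way, the "Look up by comment ID" view is a plain dictionary access, e.g. `coded["ytc_UgzHab0ivQQ69rzEJTl4AaABAg"]["policy"]`.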