Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgxjeN38N…: I wish chatgpt would have said "Is this an add transition?" after the question a…
- ytc_UgwrU3Ago…: I knew it was AI right away. Too airbrushed and the eye is fucked up…
- ytc_UgwbhjKWA…: I have fibromyalgia and use a wheelchair. I am a disabled artist, and I just gra…
- ytc_Ugz1hLnyg…: Watch CBS morning interview on utube...with Nobel laureate Geoffrey Hinton, ofte…
- rdc_jevw6ud: This is the best comment I’ve seen on here. PRISM came out in 2007, 16 years ago…
- ytc_UgzMKFzOo…: CAN YOU FUCK IT???? / Engineer: Excuse me? / CAN YOU FUCK IT? / Engineer: Noo. / CONSUM…
- rdc_cqiewbj: No, robots do not violate human rights. Having said that, their human programm…
- ytc_UgwTnYg_D…: It's idiotic to imply intent from an LLM. The training data comes from human to …
Comment
Well, I don't want to be riding in a cab with no driver, because I know that my computer every once in a while has glitches and the same thing can happen to a car, I also don't want to get a lecture from a computer I want to have a human lecturer, I don't want my doctor to be a computer I want a human doctor so it's also about the choices that humanity is going to make or not make not just about the things that ai can potentially replace.
youtube · AI Governance · 2025-09-04T12:4… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw_t8IXKiRnVGgT5xp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxsLSW9Z2UdAE0Uaux4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw8_Uanke3rr5DwAyJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyj_jKcV5pE7YxP8tp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwOIDHDuExV6A9Zdxx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw7F-VkrM8Y3_Ajnt54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz74H3-lzEPDuV1evV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxbv_mLaaTWzc5AZpd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzcaL5raiDn6bLlzBJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwNGotJ_sJbOYxd0oV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
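The Coding Result table above is simply the entry in this raw batch response whose `id` matches the selected comment. A minimal sketch of how such a batch response could be parsed and looked up by comment ID, using two entries copied verbatim from the response above (the function name and the validation step are illustrative assumptions, not necessarily the tool's actual pipeline):

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Two entries copied from the raw response above; a real response carries
# one object per comment in the coded batch.
raw_response_text = """[
 {"id":"ytc_Ugw_t8IXKiRnVGgT5xp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugz74H3-lzEPDuV1evV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}
]"""

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of per-comment codings)
    into a dict keyed by comment ID, checking that every entry carries
    an ID and all four coding dimensions."""
    coded = {}
    for entry in json.loads(raw):
        if "id" not in entry or any(d not in entry for d in DIMENSIONS):
            raise ValueError(f"malformed entry: {entry!r}")
        coded[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return coded

coded = parse_batch(raw_response_text)
print(coded["ytc_Ugz74H3-lzEPDuV1evV4AaABAg"])
# {'responsibility': 'none', 'reasoning': 'deontological',
#  'policy': 'none', 'emotion': 'resignation'}
```

Retrieving any other coded comment is then a single dictionary access on its ID, which is essentially what the ID lookup at the top of the page offers.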