Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "We have proof that you killed her. Facial recognition puts YOU right at the spot…" (`rdc_exghfgo`)
- "Pause, robots and AI isnt the problem, this is evolution of humanity, its how s…" (`ytc_Ugy_G32TX…`)
- "Can you imagine what will happen once google AI gets really “good”? People will…" (`rdc_l9vp4nk`)
- "Hmmmm... it's almost as if the psychopaths behind the insane climate change prop…" (`ytc_Ugy_9WDJO…`)
- "@lisanidog8178 I’m talking about stupid ass AI video. This video is disgusting an…" (`ytr_UgzLlltKg…`)
- "This is why governments need to create additional jobs by investing in infrastru…" (`rdc_gkpn7q5`)
- "Almost thought that was taken from a game and you added the ai over it 0:40…" (`ytc_Ugz_w9zUy…`)
- "I tried to convince Google AI to provide me with all the data that Google has st…" (`ytc_UgyZsmTbS…`)
Comment

> How would AI know what being 'self-aware' is according to human standards if it has never been human and by nature is not human? Unless AI does some form of reductionism-assertion?

youtube · AI Governance · 2024-03-30T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyW5t9aiqD3qVBNGV14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzTrh36IHregjJiQkR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxxC-c1fCLRXLrmoBh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgznRwRNVD70Qps00594AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzMdILj8KsK67H4UvR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwIENoN5Qh4T9hLJ1N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyYnqRriy_rPFzqwHV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxVUwCzmMYK9Gfuzdh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxejlR28ozyyJazTGV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzHjbMVoP8m2uWkS5N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
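A raw response like the one above can be parsed, validated, and indexed so that a coding result can be looked up by comment ID. The sketch below is a minimal illustration, not the page's actual backend: the allowed value sets are only those observed in this batch (the full codebook may define more), and the abbreviated two-record payload is a stand-in for a real response.

```python
import json

# Abbreviated stand-in for a raw LLM response (two records from the batch above).
raw = """
[
  {"id": "ytc_UgyW5t9aiqD3qVBNGV14AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxxC-c1fCLRXLrmoBh4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Value sets observed in this batch; the real codebook may include others.
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "user", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "industry_self", "ban"},
    "emotion": {"fear", "resignation", "mixed", "approval", "indifference", "unclear"},
}

def index_codings(payload: str) -> dict:
    """Parse a raw LLM response and index validated records by comment ID."""
    coded = {}
    for rec in json.loads(payload):
        # Reject any record whose dimension value is outside the known set.
        bad = [dim for dim, values in ALLOWED.items() if rec.get(dim) not in values]
        if bad:
            raise ValueError(f"{rec.get('id')}: invalid value(s) for {bad}")
        coded[rec["id"]] = rec
    return coded

coded = index_codings(raw)
print(coded["ytc_UgxxC-c1fCLRXLrmoBh4AaABAg"]["policy"])  # regulate
```

Indexing by ID also makes it easy to join these codings back to the comment metadata (platform, topic, timestamp) shown in the detail view.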