Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI is dangerous because it is part of the end time, AI is not mankind friends.
The wounded beast in the Bible give it power to AI to master everything mankind knows, after masters everything than the wounded beast will take the humanoid robots to destroy mankind just like in the movie irobot.
Source: youtube · Topic: AI Governance · Posted: 2025-06-05T01:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgztfVmYQPgjrsjyMXR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxcfxCtQbn22RLStD54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzCSwt_pzfRsLdYfjp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyB4nA9a39KBVtTsGJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwai2tcNKMGP550CO54AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxVy2p-adjQmZ1sY4d4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxiEidUTsTs3ToYq5V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugx4DQ2auGbqIMF4mTt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyhLOhidtjxalMc2054AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzyqoGaHSCNWuqzHvx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
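A response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the labels that appear in this sample (the real codebook may define more categories), and the function name is illustrative, not part of any actual pipeline.

```python
import json

# Allowed categorical values per dimension, inferred from the sample
# response above. Assumption: the real codebook may include values
# not seen in this batch.
ALLOWED = {
    "responsibility": {"government", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference",
                "approval", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record."""
    records = json.loads(raw)
    for rec in records:
        # Every record needs an id plus one value per coded dimension.
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing fields: {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: invalid {dim} value {rec[dim]!r}")
    return records

# Example with one record (abbreviated id for illustration).
raw = ('[{"id":"ytc_x","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"ban","emotion":"fear"}]')
records = parse_coding_response(raw)
print(records[0]["policy"])  # -> ban
```

Rejecting a whole batch on any invalid value keeps the stored codes clean; a gentler design could instead flag bad records for manual recoding.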