Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Tesla needs to learn from Cadillac. In my Cadillac, if the camera determines tha…" (ytc_UgweRYOzC…)
- "Honestly the sooner the USA regulates their AI expansion to death. The BETTER! …" (ytc_UgxPC28D5…)
- "Vigorous Oversight is detrimental. A self-shutdown mechanism that is not resista…" (ytc_Ugzx7ujQJ…)
- "Luckily this is just fantasy. In reality most companies who on-boarded AI early …" (ytc_UgwvlDl2J…)
- "No one in my family of 5 is working past this week so I geuss pretty fucking luc…" (rdc_fn5y6ca)
- "This is one of those industries that they need to install taxes and regulations …" (ytc_UgzOl6REV…)
- "To be fair, society has been suffering from a lot of delusional psychoses for a …" (ytc_UgyKEwNo8…)
- "After seeing the comparison, of how much it takes to be able to buy a house and …" (ytc_UgyPfiecs…)
Comment
You remember that Ai that was going to blackmail the tech that was going to shut it down? What if, when AGI rolls out and it's getting too powerful, that when we try to shut it down, it starts firing all the nukes to destroy humans so that we won't be able to shut it down?! Maybe that's why the wealthy are building those underground bunkers?!?! I can TOTALLY see this happening!! If ai was going to blackmail the guy that was going to shut it down, why wouldn't ai fire the nukes to stop humans from shutting it down?????
These people that are creating this fkn crap is going to destroy mankind. And it's going to happen very very quickly when it takes off
Source: youtube · Topic: AI Governance · Posted: 2025-12-30T05:0… · ♥ 24
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgydfDnWByRMy8eOgqB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz79UJE8A2eCkDNfsd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyF5dTm6IrmP2F4ysJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwvMHn5CdU2mJ72r0l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwjy4xlBkI2SXv6YmR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwHNZFA9hsMwEy431B4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzzjoGRxjjdc4n3GL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyD4X4itzgrIrquPiB4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxg2M4bZRhr8CHb6MB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzo9wl5FiFUURlW3uF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
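A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is illustrative only: the allowed vocabularies are inferred from the values visible in this page, and the full codebook may contain labels not shown here.

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# The complete codebook vocabularies are an assumption for illustration.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "company", "user",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject out-of-vocabulary codes."""
    records = json.loads(raw)  # raises ValueError on malformed JSON
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} = {rec.get(dim)!r}")
    return records

# Example: a single-record batch in the same shape as the response above
# (the id "ytc_x" is a placeholder, not a real comment id).
raw = ('[{"id":"ytc_x","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = validate_batch(raw)
print(coded[0]["policy"])  # → regulate
```

Validating against a fixed vocabulary catches the most common failure mode of LLM coders, which is inventing a label outside the codebook; such batches can then be re-queued rather than silently stored.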