Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples — click to inspect:

- @1flash3571All Ai bots technically a bias but they are designed to minimize it … (ytr_Ugw2qG_qJ…)
- I most certainly hope so. This "globally recognised AI expert", who - remember t… (ytc_UgxYD3lM6…)
- Omg I hate when people say tesla autopilot almost crashed them when they were cl… (ytc_UgwvDz37M…)
- AI stands for artificial intellect or artificial intelligence. And that's the pa… (ytc_UgwommsOa…)
- WTH is this? where is the terminator 3000 model I ordered? I don’t want this I R… (ytc_UgzrVFYP0…)
- I saw the caption I had to speak up. No children of god would fear this. AI IS G… (ytr_UgwmYkUbm…)
- Bro, I'm already tired of people trying to "cancel" people who make AI drawings,… (ytc_Ugysrhz1e…)
- and if ai understand the reality , we can't , i dont'know what will happen , ma… (ytr_Ugw0FV92t…)
Comment

> Guest says we can't unplug AI, but it's hard to believe that our military or some malign foreign state or criminal actors couldn't destroy dozens of data centers with missiles, bombs or drones (-maybe for ransom -), effectively 'unplugging' our AI masters.

youtube · AI Governance · 2025-09-07T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyJXuuz0ztFVESToHB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyMkAR_6iiRde0dafB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz219IR8buATg-TOMN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxpZ-wUFlVBxH724RJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx0G-jUP5OPnctHWUp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwFDNFF3HC5xxHnw9V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy6SAoCU5EnnJXPbbl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwNGGJUIb2ABPnoNSh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgznlOnMI4F6_Hb5QeR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy92OLbQTAbjypnI6p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
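The lookup-by-ID flow above can be sketched as parsing the raw LLM response and indexing the coded records by comment ID. This is a minimal sketch: the function name `index_by_comment_id` is illustrative, and only the JSON schema (the `id` key plus the four coding dimensions) comes from the response shown above.

```python
import json

# Raw model output: a JSON array of coded comments, one object per
# comment, using the schema seen in the response above (two sample
# records copied from it).
raw_response = """
[
  {"id": "ytc_Ugx0G-jUP5OPnctHWUp4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgyJXuuz0ztFVESToHB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM response and key each coded record by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

coded = index_by_comment_id(raw_response)
print(coded["ytc_Ugx0G-jUP5OPnctHWUp4AaABAg"]["policy"])  # liability
```

Keying on the comment ID makes the "Look up by comment ID" view a single dictionary access rather than a scan of the raw output.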