Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “For the people that are defending it - If you think it should be fine to watch A…” (`ytc_UgwIJjRzN…`)
- “The real problem is that AI is begining to be used as a teaching tool for school…” (`ytc_UgwyRnOfT…`)
- “I am a massive sci-fi fan and I am subscribed to some channels whose owners do r…” (`ytc_Ugz5fcioZ…`)
- “For someone that has been labeled the godfather of AI, I found that mister Hinto…” (`ytc_UgxjVdxTZ…`)
- “@TrashPanda_Kooriyou're for sure there is upsides of AI but i don't think gene…” (`ytr_UgwL0ZSvv…`)
- “Ai is demonic and I will never use it! Satan will use anything to try and pull …” (`ytc_UgyXeLzjy…`)
- “Yeah in the UK local supermarkets have AI facial recognition when you just walk …” (`ytc_UgyUHGAax…`)
- “I think there’s something missing here. The people who don’t have access to robo…” (`ytc_UgwyqoKmf…`)
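The "look up by comment ID" feature above can be sketched as a simple index over the coded records. This is a minimal sketch, assuming records are stored as a JSON-style array of dicts like the raw LLM response shown further down this page; the `build_index` helper name is hypothetical, and the sample record is taken from the raw response below.

```python
def build_index(records: list[dict]) -> dict[str, dict]:
    """Index coded records by their comment ID for O(1) lookup."""
    return {rec["id"]: rec for rec in records}

# One record copied from the raw LLM response shown on this page.
records = [
    {"id": "ytc_UgwCTbjdunOtb81FoVp4AaABAg", "responsibility": "ai_itself",
     "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
]

index = build_index(records)
print(index["ytc_UgwCTbjdunOtb81FoVp4AaABAg"]["emotion"])  # fear
```

A dict keyed by ID keeps lookup constant-time even as the corpus grows, which matters if the inspector is re-rendered on every click.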
Comment
Even if for verification purposes, why would I even want to test its capacity to cause harm? Even an AI would execute its capacity to do harm not much unlike a bear would if you poke at it enough. Why tempt it unless you really want to alter its codes simply by testing it to produce that result? Or is it programmed with just the right lab tested impervious safeguards necessary to prevent even the remote possibility of that happening?
Source: youtube · 2026-01-24T06:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgwCTbjdunOtb81FoVp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzSNeRJBUfWECmxHTt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyfCR5xiSqOBzYYEV14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyThEwXNy_Av1tsiHx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgyUHNMPonG29ivC0NJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxNSAHZvJYfo8ElcFJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwrX2C6Q2-8bPH8GI94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzslc19O05nRQZiJP94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxGySEgtUAVi7J25Y14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwpg0FE5xIEU9p_ogh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]
```
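Parsing a raw response like the one above into coding results can be sketched as follows. This is a minimal sketch, not the app's actual implementation; the allowed values per dimension are inferred from what appears on this page, and the full codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the values visible on this page
# (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"liability", "regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def parse_raw_response(text: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    reject any record whose dimension values fall outside ALLOWED."""
    records = json.loads(text)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={value!r}")
    return records

# One record from the response above, used as a smoke test.
raw = ('[{"id":"ytc_UgzSNeRJBUfWECmxHTt4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"fear"}]')
coded = parse_raw_response(raw)
print(coded[0]["policy"])  # liability
```

Validating against a fixed value set catches the common failure mode where the model invents a category outside the codebook, so bad records fail loudly instead of silently polluting the coding table.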