Raw LLM Responses
Inspect the exact model output behind any coded comment.
Comment

> One thing I don't get: for the AI to be dangerous it has to become first self conscious. It has to have ego. To have his own wishes, goals and objectives. Until then it is just a tool. And the danger is human nature. The million dollar question is: can AI become self conscious????!!!! And if it can, how?

youtube · Cross-Cultural · 2025-09-29T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyEJaDMJkWa4znC0Ex4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyoGbFzecJy4i0mDZV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxn6jTsKQY3ImCkX0p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxspttQ4TPC1VLDR3V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgywONdtfirrm3TdXyJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxAEBjPKkNPiw66SiJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgynNzD8thBMWpagcZV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxt_0kgKoDZh9eGro94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxNa21kSVbOyjc45tF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxr3u1miSyReyseMml4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
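The raw response is a JSON array with one record per comment, each carrying an `id` plus the four coding dimensions shown in the table above. A minimal sketch of how such a response could be parsed and validated is below; the allowed value sets are inferred only from the values visible in this sample (the real codebook may define more categories), and `parse_coding_response` is a hypothetical helper, not part of the tool itself.

```python
import json

# Allowed values per dimension -- inferred from the sample response
# above; the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of per-comment
    records) into a dict keyed by comment ID, validating each value."""
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{comment_id}: unexpected {dim!r} value {rec.get(dim)!r}"
                )
        coded[comment_id] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Two records in the same shape as the response above (IDs shortened)
raw = '''[
  {"id":"ytc_a","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_b","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''
coded = parse_coding_response(raw)
print(coded["ytc_b"]["policy"])  # regulate
```

Validating against a closed vocabulary at parse time catches the most common failure mode of LLM coding pipelines: the model inventing an off-codebook label that would otherwise silently pollute downstream counts.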