# Raw LLM Responses

Inspect the exact model output for any coded comment.
## Random samples

- "Sam Altman has no deep technical background in AI. He dropped out of Stanford af…" (ytc_UgyOcCvmu…)
- "@KidGoku-g4v Please explain how an artist with the same skill as anorher artist …" (ytr_Ugx4RqLH7…)
- "Yeah, except your AI sucks and has no chance to displace the vast majority of hu…" (ytc_Ugym-bNV9…)
- "automation of driving jobs and other jobs is 100% possible I work for a City Tra…" (ytc_UgiujEMCZ…)
- "I had AI create an estimate to model different scenarios showing when 50% job lo…" (ytc_Ugx-E_Uyb…)
- "If anything, this whole debacle is making me feel way less of a talentless hack …" (ytc_UgykefxXM…)
- "I asked ChatGPT if it could create it’s own GNU/Linux distro from the existing o…" (ytc_UgySwG54g…)
- "It seems to me everything has been programmed into the robots everything that’s …" (ytc_Ugw00UNZY…)
## Comment
AI development does concern me however i'm not sure that humans would ever be forcefully controlled or ruled by these robots, this is because they should be smarter than humans. Over time humans have realized that violence is not good or effective so i believe that if AI did surpass human intelligence they wouldn't be killing humans like in the terminator movies but instead helping humans to become more intelligent and create peace through education.
Source: youtube · Posted: 2018-06-19T20:0… · ♥ 1
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
## Raw LLM Response
```json
[
{"id":"ytc_UgxuXsUwJiJSoEOk1FR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzxo3cOJyT6r8Jwd1x4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwrFyFd5o_BxlyXdtR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx0tAI2eeRfeV-RKAl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx936JxhOTtUou0OyF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxCP7px-NKjfebehsl4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugza8Oxhy0Bx4TEqW_d4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzchn6q-aLEWol8Mr14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyiCA71F2FS2oS2FUB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugwz0UNMHEZkRRbVnBZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
```
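The comment-ID lookup shown on this page can be sketched in a few lines: parse the batch JSON the coding model returns and index each record by its `id`. This is a minimal sketch, assuming only the batch shape shown above; `index_by_comment_id` and the two sample records are illustrative, not part of the actual pipeline.

```python
import json

# A small batch response in the shape emitted by the coding model
# (two records copied from the raw response above, for illustration).
raw_response = """
[
  {"id":"ytc_UgxuXsUwJiJSoEOk1FR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxCP7px-NKjfebehsl4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}
]
"""

def index_by_comment_id(payload: str) -> dict:
    """Parse a batch response and key each coding record by its comment ID."""
    records = json.loads(payload)
    return {record["id"]: record for record in records}

codings = index_by_comment_id(raw_response)
coding = codings["ytc_UgxCP7px-NKjfebehsl4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself approval
```

Keying by `id` makes the "inspect the exact model output for any coded comment" view an O(1) dictionary lookup rather than a scan over every batch.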