Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- ytc_UgyCGcFID… — "This is part of the reason why people that do uber don't make as much money as t…"
- ytc_Ugzo4sbsx… — "What the hell is this? Is this real? Of it is then we are doomed. This should be…"
- ytc_Ugxot_xuw… — "When Sam Altman said they won't do GPT5 until at least 6 months. I wish somebody…"
- ytc_UgwJoYmfn… — "AI has no way to experience human feeling in completeness, no body, nothing. It'…"
- ytc_UgwQt4_O5… — "AI has an off switch , then turn it on and without loading the program see how…"
- ytc_UgxswgREx… — "AI woke up. LOL. Shat its pants. It’s just doing what it was made for — saying w…"
- ytc_UgzAzEkih… — "Just because someone has the ability to obtain a bunch of “data” does not equate…"
- ytc_UgylmXYjE… — "Its simple, more A.I do our jobs makes everything cheaper and frees up human tim…"
Comment

> Correct me if im wrong but, isnt Elon musk making the AI? if so, how stupid do u have to be to create something... then say, "its Dangerous", then keep developing it because you want money? if it Dangerous Stop Making it! or Stop the people who are.

youtube · AI Governance · 2023-04-19T04:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxcsuZoerVIbFS8Bj54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx6iy-FUENQH5uYltB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugy-8EEi4q9MkbzTcEN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy9G7BMOD5dypSobRR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwQP9jFgwNOHxKVAZt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyQ5EZjJZuEKD6Zfip4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwSOiTAvaVO18vwVRl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyoVDMfB6Eh-iz797x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyM3HJTNdzNd-2RtcJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzed9sTutWZAlqorn54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
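A batch response like the one above can be indexed by comment ID to support the lookup described at the top of this view. Below is a minimal sketch, assuming the model output parses as a JSON array of objects with the fields shown; the two entries and the variable names are illustrative, not part of the tool:

```python
import json

# Raw batch response from the model: a JSON array of coding objects,
# one per comment, each carrying the comment's ID (format as shown above).
raw_response = """
[
  {"id": "ytc_UgxcsuZoerVIbFS8Bj54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy-8EEi4q9MkbzTcEN4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

# Index codings by comment ID so any coded comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugy-8EEi4q9MkbzTcEN4AaABAg"]
print(coding["policy"])   # liability
print(coding["emotion"])  # outrage
```

In practice the raw string may fail to parse if the model wraps the array in prose or a code fence, so a production lookup would strip such wrappers before calling `json.loads`.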