Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
What we need is Socialism, people. It's that simple.
Socialism = compassion
Soci…
ytc_UgyTxuycE…
Asking questions to ascertain if an AI robot has both short term and long term m…
ytc_UgxZZymxW…
Trying to understand how anyone would rather talk to a robot like this instead o…
ytc_UgwWqvHs0…
You learned that AI is used to enhance your life and get results faster then you…
ytc_UgyiLHFKj…
I dont buy the "this can kill humanity, this is why I talk about this" and next …
ytc_UgxYPDFeG…
Could a global initiative that halts the sale and production of compute (GPU's) …
ytc_UgzuX3MZd…
He’s raising alarms? I thought this is what all these folks wanted was for thei…
ytc_UgzxHB9wl…
The idea that an AI could be "aware" it's being tested, or aware of anything at …
ytc_UgwriyPCQ…
Comment
The real reason it can go bad is because as you said most people care in gen about people! The real issue is there are a lot of people that do not care! This will mean there will be bad AI as well! Bad people will make bad AI!
youtube
AI Governance
2025-10-18T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz54tRSGTf2WUpK3XB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwex0cyIxLK0IvzWZl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxgtOpmkin3sT4E5bt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwSNavfO0EFFp1ZIW14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVqiB2wQ0iw5-d1794AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwf6yGy9XbDbxjJEUZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzALobPq-0dklCoiS54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwBuWyXbWDR7pMniZt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwoUIq-U3tZOcbWT7l4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxmPi5LHrKMWrCuytV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
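A raw response like the one above has to be parsed and checked before its rows can populate the Coding Result table. Below is a minimal sketch of that validation step, assuming the category sets visible in this dump (the full codebook may contain values not shown here); `validate_batch` and `ALLOWED` are hypothetical names, not part of the app.

```python
import json

# Allowed values per coding dimension, inferred from the values
# visible in this dump (assumption: the real codebook may be larger).
ALLOWED = {
    "responsibility": {"user", "developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every row against the codebook."""
    rows = json.loads(raw)  # raises ValueError if the model emitted non-JSON
    for row in rows:
        # Comment IDs in this dataset carry a "ytc_" prefix.
        if not str(row.get("id", "")).startswith("ytc_"):
            raise ValueError(f"bad comment id: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: {dim}={row.get(dim)!r} not in codebook")
    return rows
```

Rejecting the whole batch on a single bad row is a deliberately strict choice; a gentler variant could instead flag offending rows for manual re-coding.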