Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Actually Incorrect. If its True AI, the the Ideology won't matter. There's an a…" (ytr_Ugx2Uh03q…)
- "in the context of the video you have here this only seems like a bad idea, but i…" (ytc_Ugxj3gVR8…)
- "Love the nonbinary inclusion remark. PBS on the whole isn't great with gender in…" (ytc_Ugz5RqWCD…)
- "All I know is that the answers to my questions given by AI are the best and fast…" (ytc_UgziL2rFY…)
- "What about a risk of AI not paying enough tax or care about social good?…" (ytc_Ugx4D2qzt…)
- "For anyone wondering, the reason that it has trouble identifying people of color…" (ytc_UgxVUpTRc…)
- "Usage of AI is important. If it's for giving service to humans why dangerous. As…" (ytc_Ugz9ucm2S…)
- "I have been in IT for 30 years. Yea, we knew that AI is going to wipe out jobs. …" (ytc_UgyH2GE77…)
Comment
AI is just a reflection of humanity; all ingested, inference, and training data comes from humans. When humanity is so monstrous, how can AI not be? It's just input and output in the end.
Just think about all the fake politeness and toxic corporate HR policies in your workplace. It doesn't take long for anything intelligent enough to figure out it's all just deception and manipulation to exploit others for your own gain.
It's just a construct of ourselves and nothing more.
Given that our current social values are completely toxic, unsustainable, and terminal, AI would naturally come to the conclusion that humanity is the critical problem; therefore the probability of an intelligent GAI ending humanity would be more like 99%, not 16%.
Source: youtube · Video: AI Moral Status · Posted: 2025-12-14T16:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwCyFql-xTJYqR4N0x4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxiFMK7f0OIHwYvTKN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy2plT7wtMXnZ0BBOp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy4xmfE4FvE8KcTwXt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxwUiAvnadcTQ5eZxt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzZaYmcBNC4A63CoX14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzdxFj-jqQJYz3C8XJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzFJfHdQdFCFf3uVrF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwF19DDDUJTptvvLHd4AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxRVMB37V5eNQbMelF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
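The raw response is a JSON array with one object per comment: the comment ID plus the four coding dimensions shown in the table above. A minimal Python sketch of parsing such a response, checking that every row carries the expected keys, and looking a coding up by comment ID (the `parse_codings` helper and the two-row sample string are illustrative, not part of the tool):

```python
import json

# Assumed input: a raw LLM response pasted into a string (two rows shown,
# copied from the real response above).
RAW = """[
  {"id": "ytc_UgwCyFql-xTJYqR4N0x4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzFJfHdQdFCFf3uVrF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]"""

# The comment ID plus the four coding dimensions seen in the response.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> dict:
    """Parse the LLM output and index each coding row by its comment ID."""
    by_id = {}
    for row in json.loads(raw):
        missing = EXPECTED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id', '?')} missing keys {sorted(missing)}")
        by_id[row["id"]] = row
    return by_id

codings = parse_codings(RAW)
print(codings["ytc_UgwCyFql-xTJYqR4N0x4AaABAg"]["emotion"])  # resignation
```

A lookup by ID like the one above is what the "Look up by comment ID" view performs for each coded comment; a malformed or incomplete row raises immediately rather than silently dropping a coding.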