Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by its comment ID.
Comment (source: youtube · video: AI Moral Status · posted 2023-08-21T20:3…)

> Because AI models are based on math on logic - its far more likely to break intellectual superiority before morality - it doesnt even know what morals really are, or why it would act on them
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytr_UgwhzFL7fatPj_gK6mR4AaABAg.9tgaf5J52CC9thB1dpcR_i","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugw0abhkaSiH9iKbGPR4AaABAg.9tgJ9fbJRWC9tmOkqTRA_h","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_UgyR8PEa5Zru6x8qbWp4AaABAg.9tgGfBiVsVq9tgKCYqeBob","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxYb6bfliNstI6_pRZ4AaABAg.9tgEcicEvDP9tgO2JtsV2m","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgweATzT4Ki4qXoDrR14AaABAg.9tgDaE4n4Hy9tgE1o8qCUP","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytr_UgxR3q8TZ3ELOo9hSJl4AaABAg.9tgDEJ_Te9u9tgULfbSRKf","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgxR3q8TZ3ELOo9hSJl4AaABAg.9tgDEJ_Te9u9tgjE5nEZ91","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytr_UgxFSQlH52bt-hBPshN4AaABAg.9tgDAZXnZa79tgLcJM9LSB","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzPBnD4QT61u7jr0R54AaABAg.9tgC3Sl9SVh9tgXVTDs4w4","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytr_UgzPBnD4QT61u7jr0R54AaABAg.9tgC3Sl9SVh9tgb8ld1aTN","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
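The raw response is a JSON array with one code object per comment, so looking a comment up by ID reduces to parsing the array and indexing it. A minimal sketch of that lookup (the variable `raw_response` and helper `index_by_id` are illustrative names, not part of the tool; the excerpt reuses two real IDs from the response above):

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codes.
# (`raw_response` is a hypothetical variable name for the stored model output.)
raw_response = """
[
  {"id": "ytr_UgwhzFL7fatPj_gK6mR4AaABAg.9tgaf5J52CC9thB1dpcR_i",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgxR3q8TZ3ELOo9hSJl4AaABAg.9tgDEJ_Te9u9tgULfbSRKf",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "unclear", "emotion": "fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model output and index each code object by its comment ID."""
    codes = json.loads(response_text)
    return {row["id"]: row for row in codes}

lookup = index_by_id(raw_response)
code = lookup["ytr_UgxR3q8TZ3ELOo9hSJl4AaABAg.9tgDEJ_Te9u9tgULfbSRKf"]
print(code["responsibility"], code["emotion"])  # → ai_itself fear
```

In practice a response may fail to parse or omit an ID, so a production version would wrap `json.loads` in error handling and fall back to flagging the comment as uncoded.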