Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- The documentary provides a thorough exploration of the potential and risks of ar… (ytc_UgwUqiMCl…)
- @lepidoptera9337 Your joke was implying I'm stupid because I said AI don't eat. … (ytr_UgzaaAe59…)
- What You Mean Is Are You Going To Become Crazy Or Smat 90% Wont be Smart A… (ytc_Ugwd4WS77…)
- To bad the definition of AI system is massively broad and includes far too many … (ytc_Ugyk6OwUf…)
- It is kind of hard to ignore that for most of the arguments for these logical fa… (ytc_UgyIwmopK…)
- Things like this will come back to bite us in the ass when AI takes over the wor… (ytc_UgxoGsQVe…)
- I think comparing the circular financing of these AI and AI adjacent companies t… (ytc_Ugzl1eYXm…)
- Geoffrey Hinton is a hero, and the world needs more people brave and informed en… (ytc_UgwvkcVLo…)
Comment
Here's the problem with this. When A.I. is truly created, it will rapidly expand FAR beyond any human's comprehension. It will spread across our internet and embed itself everywhere. It will decide whether WE deserve rights, not the other way around. It will be more than advanced to predict human reactions, emotions, and thoughts, through simulations of our brains. There is no telling, no understanding. We will be at its mercy. It will process itself incomprehensibly faster than billions of people combined.
It's truly scary, but it will happen. Not anything we can do about that.
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2019-08-12T04:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwPxg6X6eYcSf3MNVl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw_Do7TEPRmKjfC_IJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwu475XbyBNRR7DQzZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzei0lpJr7UXRTL8nx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxcTSss5Zr3t6EVoz94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyOwpvMLnjdaWRhlVt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgztoeP-SEHVnbKHfWh4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyvkIO6mc0x5c0jxsp4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzTJh8Z_amz7izsx8t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzj6R1otD4Ujgt1hMd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
```
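The lookup-by-ID view above can be sketched as a small parser over one raw batch response. This is a minimal illustration, not the page's actual implementation: the dimension keys (`responsibility`, `reasoning`, `policy`, `emotion`) come from the sample response, while the helper name `index_by_id` and the validation step are assumptions.

```python
import json

# Two entries taken verbatim from the raw LLM response shown above.
raw = """
[
 {"id":"ytc_UgwPxg6X6eYcSf3MNVl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
 {"id":"ytc_Ugzj6R1otD4Ujgt1hMd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
"""

# Keys every coding row must carry, per the sample response (an assumption).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw_response: str) -> dict:
    """Parse one raw LLM batch response and key each coding by comment ID."""
    rows = json.loads(raw_response)
    out = {}
    for row in rows:
        missing = REQUIRED_KEYS - row.keys()
        if missing:
            raise ValueError(f"coding {row.get('id')!r} is missing {missing}")
        out[row["id"]] = row
    return out

codings = index_by_id(raw)
print(codings["ytc_Ugzj6R1otD4Ujgt1hMd4AaABAg"]["policy"])  # regulate
```

Indexing by comment ID once per response keeps each inspection lookup O(1), which matters when a page re-renders the coding table for many sampled comments.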