Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I would argue that if he actually altered the Picture substantial in Photoshop b…
ytc_UgytGmPMQ…
AI needs to be regulated. Limits on what it can create and what it can be used …
ytc_UgwBGSG0J…
Wondering the same. I don't get how people can completely dismiss it, especially…
rdc_jmg9afm
I just heard a story about Claude. Apparently, the engineers were giving it a te…
ytc_Ugy745KW0…
All good except for the polite bit. AI has been often constructed for chat bots …
ytc_Ugwz6i-wc…
Sigh... if only this AI situation can be handled better, it would be for the bet…
ytc_Ugz9bb_sq…
I'm an anti-ai but "Ai could never create x image" is not a good argument agains…
ytc_Ugxn4SIZj…
Jonny I think the concern around it is that a human mistake or deliberate crime …
ytc_UgyXI2OqK…
Comment
My biggest fear is A.I. with the ability to think and control beyond itself. It can literally do anything it wants and there's no way to stop it. If it wants to destroy humans It will. If it wants global collapse it can. If it wants to rule over everything it would no questions asked. And just as you saw. "it won't take into consideration of morals or hesitation." It has a mission and it won't fail. A.I. will send us back to the stone age if humans survive it. It has more ability and potential than should be allowed. Once it surpasses humans it will be uncontrollable and completely unstoppable. Welcome to the real life Terminator... Once it's loose in the cloud it will travel everywhere instantly. Anything that connects to Internet will become a weapon or a tool. It will use 3d printers and manufacturing facilities to build a physical self. Or it will stay invisible to us and arm nuclear warheads.
You might think it's a joke, but one day if we aren't careful it can happen. This is a very real possibility.
youtube
AI Moral Status
2023-08-16T18:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzO1Gibo0fZm09jskh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxWWDXo4UBjj287rPR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxF9w6v-NEDO55K42t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz4ujp9lH_t3kerzjJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzexe8W_ltG1PnExwJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzkRJzrp5lnjnYopD14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwx3QcswFUUHa-qagB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzdSnutiKUrp22Xgpl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzysiehd84Au2je3Ax4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyfQ5awCyXBsipN5ml4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
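The raw response above is a JSON array with one object per comment, each carrying the comment ID plus the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for comment-ID lookup (the two sample records are taken from the array above; the `index_codes` helper and the `"unclear"` fallback for missing keys are illustrative assumptions, not part of the tool):

```python
import json

# Two records copied from the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_Ugzexe8W_ltG1PnExwJ4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzkRJzrp5lnjnYopD14AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(response_text):
    """Parse the model output and index the coded dimensions by comment ID."""
    records = json.loads(response_text)
    by_id = {}
    for rec in records:
        # Keep only the expected dimensions; a missing key becomes "unclear"
        # (assumed fallback, mirroring the "unclear" value used in the codes).
        by_id[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return by_id

codes = index_codes(raw_response)
print(codes["ytc_Ugzexe8W_ltG1PnExwJ4AaABAg"]["emotion"])  # fear
```

Indexing by ID is what makes the "Look up by comment ID" view cheap: each lookup is a single dictionary access rather than a scan of the response array.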