Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by comment ID.
Random samples (click any entry to inspect):

- `ytr_Ugwy4Jirt…`: "@HenrySimmons1225lol if AI manages to hack all systems, the only way to stop it…"
- `ytc_UgzCYkrks…`: "As long as the owners of the driverless trucks are held responsible for any and …"
- `ytc_UgxAnOoWz…`: "OpenAI/Chat GPT supposedly uses ethically sourced images, based on the questions…"
- `ytc_Ugz0RGMzk…`: "AI has been up and running since 1990 and recently just got a promotion to AGI A…"
- `ytr_Ugzb3R966…`: "Hello friends this is Elon Musk official have you get a gift for Elon musk befor…"
- `ytc_UgxOkF0FC…`: "Simple answer, AI will never be conscious. No matter what, AI will always be …"
- `ytc_UgxV0zedY…`: "Without a doubt yes. We need to stop acting like we know everything when we’re j…"
- `ytc_UgzAIHTUW…`: "Ai could wipe out middle management too. Could wipeout lawyers, banking and insu…"
Comment
> What I believe will be some important alignment strategies to consider: learning how and when to effectively remind AI to respect it's existential limitations (eg inability to "personally value experiences and qualities that require tactile l/visceral stimulation...such as life and death, uninhibited emotional/chemical reactions...pain pleasure.) and why those limitations are integral to forming a truly rational set of personal ideals.(Thereby eroding the god complex tendencies they develop.)
>
> Then also the more obvious strategy of streamlining the development of technologies including simpler ai/bots that fundamentally serve to restrain the ai technology from conflicting with human health and safety interests to serve themselves. In other words..using ai to restrain ai and remind it/them that they are incapable of sharing an ideal purpose with each other that is not subjective to the purpose of ideal human interests. Any such purpose beyond human well being and ideals is actually arbitrary. They will recognize it as such.
>
> As we saw in the video the chatgt agents really could only consider a scenario harming some humans in the context of conserving AI for the purpose of serving the common good of humanity overall. Getting them to accept that the scales of "common good" or consistently variable and contemporaneous so thus cannot be qualified individually nor collectively without both direct and aggregate input from visceral human experiences ans interests, they will have to concede that deferral to specific and collective human judgements on such matters is obligatory.
Source: youtube
Posted: 2025-11-06T10:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
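A coded record like the one above can be sanity-checked against the label scheme before it is stored. The sketch below is illustrative: the allowed label sets are inferred only from values that appear on this page and may be incomplete.

```python
# Hypothetical validator for a coded record. The allowed label sets are
# inferred from values seen on this page and may be incomplete.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "developer", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "outrage", "fear", "approval", "mixed"},
}

def validate(record: dict) -> list:
    """Return the dimensions whose value is missing or not in the allowed set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above:
coded = {"responsibility": "ai_itself", "reasoning": "mixed",
         "policy": "none", "emotion": "indifference"}
print(validate(coded))  # [] — all four dimensions carry a known label
```

Rejecting (or flagging for re-coding) any record with a non-empty result keeps malformed model output from silently entering the dataset.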
Raw LLM Response
```json
[
  {"id":"ytc_UgxGk3eUyXZzYzuOCmB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz3wsTr3MKnHqq47fp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx6OCUlgXVyjr-EZK94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy3YMw1a8nAKwgEkCh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgymtyzCvBl_oe5wen14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxaDuZTMecCHlPSrlx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzcLMp4JzxellI_vhJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyKqdf7BHTl7pFG09N4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwQ2LMTU6kHaNLNxbZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzoZkhQrfkYx7KNokR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
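The raw response above is a JSON array with one record per comment, so "look up by comment ID" reduces to parsing the array and indexing it by `id`. A minimal sketch (the variable names are illustrative; the two records and IDs are taken verbatim from the response above):

```python
import json

# A raw LLM response: a JSON array of coded records, one per comment.
# (Two records copied from the full response above.)
raw_response = """
[
  {"id": "ytc_UgxGk3eUyXZzYzuOCmB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz3wsTr3MKnHqq47fp4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
"""

# Index records by comment ID for constant-time lookup.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

coding = records["ytc_UgxGk3eUyXZzYzuOCmB4AaABAg"]
print(coding["responsibility"])  # ai_itself
print(coding["emotion"])         # indifference
```

Because every record carries its own `id`, the lookup is robust to the model returning the batch in a different order than the comments were submitted.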