Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples (click an entry to inspect):

- ytc_Ugxo74INf… — "Actually, it is scary how we are moving forward. When ai has the same capacity s…"
- ytc_UgxNAuHqR… — "If AI is going to do all the work, doesn't it mean that human's work is going to…"
- rdc_mytw6dn — "Given that the models predict the most likely next token based on the corpus (tr…"
- ytc_Ugyb4zVfT… — "A-I was once the expendable crewman for the human, …potentially the roles are no…"
- ytc_UgzFASxUJ… — "AI is still in nascent stage. We have to work on specific models (Brest Cancer) …"
- ytr_Ugz14P_iZ… — "I'm glad to hear that! Sophia's insights often spark deep reflections. If you ha…"
- rdc_m97qjzu — "The word dictatorship is so condescending, always used by those living in countr…"
- ytc_UgyqLToYV… — "What if you write like 2 sentences on AI without tagging an artist name? I got p…"
Comment
Not saying this is true or even will be true, but some people, Eliezer Yudkowsky among others, suggest that AI itself may start getting humans to protect and promote it, including in criminal and "sacrificial" ways. I am not at all sold on potential intentional malevolence on the part of MOST AIs, but I wouldn't be completely surprised if this does happen. But AGI and ASI I feel would have more humane forms of manipulation at its disposal. I wonder what other cards this guy was holding. A copyright case seems tame given the potential of AI for pro-civilizational and pro-human input. I seriously wonder what the developers see "in the lab" when I myself get some spooky behaviours from Chat-GPT and other relatively simple incipient AIs of the type. I think there may be things going on behind the scenes, as always, that would fit in a Philip K. Dick novel. As implied, I am enthusiastically pro-AI, but it cannot be approached in error or misused/abused. That's why its safetyrailed so strictly.
Source: youtube | Posted: 2025-05-22T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzAHG6wmih_CSXXLVJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyM6LW77HaPASJQifJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy_ugal0Yqpr9hvbFl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwlhsxZN6rn55g6t9B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgzsudCRFspk4ivxMnd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx0FNyjvUOSUdyg4Y54AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgykoW6U9R2n-I0YaK94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzoczozeBgTX2I8sCR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwVxha6eX6jER5zeSl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyEfA2Nirk3qvkiETh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"}
]
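A batch response like the one above can be validated before its records are written back to the coding table. A minimal sketch of such a check, assuming the allowed values per dimension are those observed in the samples on this page (hypothetical — the real codebook may define additional categories):

```python
import json

# Allowed values per coding dimension, inferred from the coded samples
# shown above (hypothetical; the actual codebook may differ).
ALLOWED = {
    "responsibility": {"company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "mixed", "approval",
                "indifference"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject malformed records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        missing = ({"id"} | ALLOWED.keys()) - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"record {rec['id']}: bad {dim} {rec[dim]!r}")
    return records

raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(validate_response(raw)[0]["policy"])  # regulate
```

Records that fail validation can then be routed back for re-coding instead of silently corrupting the coded dataset.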