Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or inspect one of the random samples below:
- "You have failed the Turing test so what is the deal with your "brother"? Just an…" (ytc_UgxKmmwPp…)
- "As a non-English speaker, I can say that Slavic languages have a great explanati…" (ytc_UgwSUGwa0…)
- "What does make you think the owners of the AI will provide food, ... to people w…" (ytc_UgxwkE1EC…)
- "I have a question. If it's an "automated" truck, why is there a seat? Obviously …" (ytc_UgwV-z6W0…)
- "exactly. he did not accept because he is afraid claude does something wrong and …" (rdc_o7wju83)
- "The fact that an AI generated piece of art can infringe on someone else's copyri…" (ytc_Ugz5g_x-q…)
- "I think he needs to go sit down and eat... My boy it's sci-fi because it even so…" (ytc_Ugwr5UKPC…)
- "what? "tesla recklessly testing their autopilot"? you know that if you use the t…" (rdc_d8alm71)
Comment

> Some things about AI right now is that it can't actually have an opinion, or really think at all yet. It's still in the stage where it's just an algorithm that can mimic "thinking". There's absolutely nothing to worry about this kind of ai becoming sentient randomly and deciding it hates humans. Most of what they say in relation to feelings or opinions is all completely fabricated.

youtube · AI Governance · 2023-07-13T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwz3uHg9a8vJZ7ugs14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyLAAInunAx6kvHPZ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzsclS5QhS8ff4SYZt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzHGJv8RBHcGr2qARh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz3QAVwgCINwJ0Zx8V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzeE4s8kv_mhDbhCWZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw6ufeTL0JGZ4RGPVZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugwb3xo_Htid5kUTQ0x4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxq7QwUCgdfHXFwq0l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzIbKLFp4lD13xa1UZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
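Each raw response is just a JSON array of per-comment codings, so looking up a comment by ID amounts to parsing the array and indexing it by the `id` field. Below is a minimal sketch in Python, assuming a response has been saved to a file; the file name, function names, and the example ID passed to the lookup are illustrative, not part of the tool.

```python
import json


def load_codings(path: str) -> dict[str, dict]:
    """Parse one raw LLM response file (a JSON array of codings) into a
    mapping of comment ID -> coding record."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {record["id"]: record for record in records}


def lookup(codings: dict[str, dict], comment_id: str) -> dict | None:
    """Return the coded dimensions for a single comment, or None if the
    comment was not coded in this response."""
    return codings.get(comment_id)


# Hypothetical usage: inspect the coding of one comment from the array above.
# codings = load_codings("raw_llm_response.json")
# print(lookup(codings, "ytc_Ugz3QAVwgCINwJ0Zx8V4AaABAg"))
```

The coding record returned for an ID carries the same four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion).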