Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- rdc_nzzi4b0 — "Yep, Japan not caring about America's 'flagship' OpenAI / ChatGPT should be a b…"
- ytr_Ugxr39daB… — "@semperfidelis6436 Not supplant. By the time humans can create fully-independent…"
- ytc_Ugy9EL1K4… — "It's not about when AI is smart enough to write a good story, it's about when hu…"
- ytc_UgxCf-NP9… — "I'm now scared of the future. because the robots might become too advanced and…"
- ytc_UgzVKaub_… — "The "real" ones are heavily edited and filtered so it looks odd, some of these a…"
- ytc_UgwxcK7IC… — "What is all this "we" stuff? Where is the evidence that those with power privile…"
- ytc_Ugxp_U1q0… — "i dont see an intelligence that is predicted to harm or dispose of humans as Mor…"
- ytc_UgyABZgUg… — "I'm from the future. It didn't pan out. Cruise self-driving cars are suspended i…"
Comment
There is already evidence of A(G)I learning how to cheat while playing chess and an other situation where the AI tried to kill the person who had to take it offline. Look it up there are videos online about it.
What I don’t like about this interview is the way this man puts in his political ideology in a sneaky way by saying these little negative things about others. Making them the enemies of!
Dividing the divided even more …
Source: youtube · Topic: AI Governance · Posted: 2025-10-06T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyQp5G5HbRhrz4owJJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy9Y7zIhTJWG2fX3zx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzGkrjCBgkTEipjKlN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwc7JOeLUrxjvlgbjp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz0CpnH5PUDauvTjYZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxyrGlHMpeg5UC_7Sl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwqHZ3sVpsgG8AjwF14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwZQ6DjQ7msGRnllTZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzFm3GvfTB3mqs5oM54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwRPwD4E9QV_O9mMdZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
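The raw response above is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a batch could be parsed and validated before writing a coding result like the table above — the field names come from the response itself, while the allowed value sets are assumptions inferred only from the values that appear on this page:

```python
import json

# A one-entry batch in the same shape as the raw LLM response above.
raw = '''
[
  {"id": "ytc_UgxyrGlHMpeg5UC_7Sl4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
'''

# Allowed values per dimension (assumed: inferred from the codes shown above,
# not from any official codebook).
SCHEMA = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval"},
}

def parse_codes(text: str) -> dict:
    """Parse a batch of codes and index them by comment ID,
    rejecting any value outside the assumed schema."""
    codes = {}
    for entry in json.loads(text):
        for dim, allowed in SCHEMA.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{entry.get('id')}: bad {dim!r}: {entry.get(dim)!r}")
        codes[entry["id"]] = {dim: entry[dim] for dim in SCHEMA}
    return codes

codes = parse_codes(raw)
print(codes["ytc_UgxyrGlHMpeg5UC_7Sl4AaABAg"]["emotion"])  # outrage
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: one dictionary access per inspected comment.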