Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytr_UgxeKGOPA…`: "Why are you commenting that you want a job on Google's video on YouTube? Are you…"
- `ytr_UgzYnqlrG…`: "Yep just jailbroke regular Gemini on my phone with cat for a secret yes and dog …"
- `rdc_jfv3qma`: "Investments != results. While I haven't looked at this particular model they ar…"
- `ytc_UgxpB32yk…`: "Robots and Ai should be used to do what we can't or what is hard for humans…"
- `ytc_UgwC_TeaB…`: "stupid ai. As if someone is going to believe that somewhere out there there is a…"
- `ytr_Ugx8aAqRK…`: "@morganator4874 the lay offs are coming. The physical AI era is upon us. Mass…"
- `ytc_Ugy-WMXIR…`: "So if AI destroys humans how will it sustain itself? Who will power it, maintain…"
- `rdc_lp8uiko`: "What do they want to do with this AI? I can't see much use in it. Almost everyth…"
Comment (youtube · AI Harm Incident · 2025-07-24T11:2…)

> AI are still not fully integrated with all human systems especially mechanical or analog ones. So theoretically after reading the email Kyle could have got up out of his chair and walked to servers/GPU's running the LLM and pressed the "off" button within 7 minutes and the AI couldnt do shit.. 😂😂
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxux-vlC0QAoJ1V6Il4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyQkS5EqElIyjZlolN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxvGGil7CMz76xBu0N4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx8xY-U46pZxqkrdm14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzdR5tTtQ-_btn5KRp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzrDPFmB3tq0oe2Awx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxk72A-Gd8BxQLr1GB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzO_quQoh7f0Aey3WN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyu3ZXQgDYu7TjRlgt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwQObOk8sBy0sy1_HF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
```
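The raw response is a JSON array of per-comment codes across the four dimensions shown in the table above. A minimal validation sketch for such output is below; the allowed value sets are only those visible on this page, not the project's authoritative codebook, and `validate_codes` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from the codes shown on this
# page (assumption: the real codebook may define more values).
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate", "ban"},
    "emotion": {"indifference", "mixed", "approval", "resignation", "fear"},
}

def validate_codes(raw: str) -> list[str]:
    """Return a list of error messages for a raw LLM response string."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    errors = []
    for i, rec in enumerate(records):
        if "id" not in rec:
            errors.append(f"record {i}: missing id")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(f"record {i}: bad {dim}={value!r}")
    return errors

raw = ('[{"id":"ytc_Ugxux-vlC0QAoJ1V6Il4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
print(validate_codes(raw))  # [] — the first record above passes
```

A check like this catches the usual LLM-coding failure modes (truncated JSON, invented category labels, missing IDs) before codes are written back to the database.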