Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> So we already have on record incidents of AI outright lying to engineers for self preservation. It has been said that currently AI has an IQ of 180 + and that by the end of 2027 it would be far beyond human genius. I absolutely it will take control long before we realize it. Goodbye humanity, as we know it.

| Source | Category | Posted | Likes |
|---|---|---|---|
| youtube | AI Harm Incident | 2025-05-31T22:3… | ♥ 1 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxTYPWDrQjcpfFr5x14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyTtJrvR7NwO4dQy1p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwz6rK6SbXvJKzh98l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzvdurJBoUhSSjrP3V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzwDIrx9zbAr1fR5Tl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxVONzoxIQNDyNbCJx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugz5v-_gHr9QBpf9kgh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzigWCBmmJV5VxjIJx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw70jxGP-KN8VWQX5V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxXPBGB3li38dLoEIx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
```
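The raw response above is a plain JSON array, one object per comment, keyed by `id` with the four coded dimensions. A minimal Python sketch of how such a payload could be parsed and looked up by comment ID (the `index_codes` helper is hypothetical, not part of the pipeline shown; the two records are copied from the response above):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, abridged from the
# example above (two of the ten records).
raw_response = """
[
  {"id": "ytc_UgyTtJrvR7NwO4dQy1p4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzvdurJBoUhSSjrP3V4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
"""

def index_codes(raw: str) -> dict:
    """Parse a raw LLM response and index the coded dimensions by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"}
            for rec in records}

codes = index_codes(raw_response)
print(codes["ytc_UgyTtJrvR7NwO4dQy1p4AaABAg"]["emotion"])  # fear
```

Indexing by ID is what makes the "look up any coded comment" view possible: the coding result table for a single comment is just one entry of this dictionary.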