Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "11:00 all these fake tests for the AI to check its alignment make oyu wonder if …" (ID: ytc_Ugy7G9vA2…)
- "WTF SHE IS EVEN SMILING AFTER THAT FK THIS SH!T IF I WAS THERE I WOULD SHOOT THA…" (ID: ytc_Ugi_arzHI…)
- "What that conversation just confirmed for me is that ChatGPT is not independent …" (ID: ytc_Ugysi0RSK…)
- "Can you Imagino a self driving car with a nuclear bomb inside in Times Square?…" (ID: ytc_UgxNqvxDl…)
- "Brutal, sorry to hear it. Fly out for on-site is likely because of the massive a…" (ID: rdc_ohxwvn8)
- "For industries where collaboration is required: tendering services, public facin…" (ID: ytc_Ugz6QTia6…)
- "even A.i needs maintenance, Im a industrial maintenance tech I service everythi…" (ID: ytc_UgzkDTDEf…)
- "Am I the only one who brutally tortures people in my ai chats- just me ya’ll?…" (ID: ytc_UgzJklq9J…)
Comment

> I guess what we need to aim for is benevolent super intelligence. If we can't prevent it from being created at some point in the future. Maybe we can create one that likes and cares about Humans and other life on Earth? Could we give AI a sense of cute? Cuteness is why Humans care about things (babies, pets etc..). If we could make our new AI overlord think / feel that we are cute. Maybe Humanity can survive?

youtube · AI Governance · 2025-09-04T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxCDFKYmZT55-82t4R4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwRUQDZcwoRtrM7lXZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx-HhzWd4Ajl6Kvz8F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxQJETYwN9wi2745Ap4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyPgjXMnj4RtCJOiRd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxfa4zIvTVhQqdHhnN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzzdqQlscTWF79qfG14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyvSHG_LrcWspcFxiN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzA9bNThU7j8w9wbed4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzPsTICMLA_j8JPfEV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
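For downstream analysis, a raw response like the one above can be parsed and checked against the coding scheme before use. Below is a minimal sketch; the allowed category values are inferred from the codes visible on this page, not from an authoritative schema, and `parse_llm_response` is a hypothetical helper name:

```python
import json

# Allowed values per dimension, inferred from the codes shown on this page
# (assumption, not an authoritative schema).
SCHEMA = {
    "responsibility": {"developer", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse the model's JSON array and keep only rows whose codes are valid."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # malformed row: skip
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

# Example: one valid row and one row with an out-of-schema code.
raw = """[
  {"id":"ytc_UgxQJETYwN9wi2745Ap4AaABAg","responsibility":"developer",
   "reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"bad_row","responsibility":"martians",
   "reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]"""

rows = parse_llm_response(raw)
print(len(rows))  # the invalid row is filtered out
```

Validating against a fixed category set catches the most common failure mode of LLM coding (the model inventing a label outside the codebook) before the codes reach the results table.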