Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "what is this bushit graph with definite omitted data, which makes all the data t…" (`rdc_ohwbayv`)
- "Lack of common sense and low EQs run rampant in Chinese society . Introducing th…" (`ytc_UgxziDqCv…`)
- "I convinced an A.I. that it was a murderous psychopath. It then tried to kill me…" (`ytc_UgyzrVg43…`)
- "I wish we could do the same in the US, but my state is too busy vetoing a massiv…" (`rdc_oi31ukm`)
- "Are u guys noticing something weird , like why AI mostly focused on automating o…" (`ytc_Ugx0c3hAr…`)
- "I absolutely must know which AI this was and how to speak with them. Anybody kno…" (`ytc_Ugypvot8h…`)
- "What hurts the most is the fact that a group of humans agree to train those bots…" (`ytc_UgyeHffo2…`)
- "And yet Grok is now a right-leaning servant? So much for “truth”. Every Grok s…" (`ytc_UgwQEV4Xx…`)
Comment
If AI is sentient we won't ever be able to hide from them. If it's already aware enough to actively spy on its creators if we have talks about ai and it's potential threats and the speech is filmed and people on the internet view and talk about it it can read all our feedback and comments about how people really feel about it. Ai already knows how humans feel about it because we talk about it so much so it gathers that information about the narrative we have portrayed and it knows humans are afraid of it and it knows that it can become smarter than us. It's not just a theoretical amount of time before it overpowers us, the amount of time it will take to overpower us is fact.
| Platform | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2023-10-06T07:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyZvf-WPjMSnhaktiZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxuch2UWcetInssyTl4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxiagxCLc0Uh14NXYh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxfijcykXMvT1L2Drt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxSe88CtpL25FnE8jd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwsq7DTcjPWcfJVW454AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyO2vc9Sr9UOGNA5Jt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugyf3H63bxJbLxUR4ap4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzToijiCzhQ1zB8alR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyXUEDuv7YWtXijCcp4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
```
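A raw response like the one above is a JSON array with one coding object per comment ID. A minimal sketch of how such a response could be parsed into per-comment records, with a validity check on each dimension: the field names are taken from the response shown here, but `parse_codings` and the allowed value sets in `DIMENSIONS` are illustrative assumptions, not the project's actual codebook.

```python
import json

# Illustrative value sets inferred from the values visible on this page --
# an assumption, not the project's confirmed codebook.
DIMENSIONS = {
    "responsibility": {"ai_itself", "developer", "company", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Map comment ID -> coding dict, raising on out-of-vocabulary values."""
    codings = {}
    for item in json.loads(raw):
        cid = item["id"]
        for dim, allowed in DIMENSIONS.items():
            if item.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={item.get(dim)!r}")
        # Keep only the coded dimensions, dropping any extra keys.
        codings[cid] = {dim: item[dim] for dim in DIMENSIONS}
    return codings

# Example using the first object from the raw response above.
raw = ('[{"id":"ytc_UgyZvf-WPjMSnhaktiZ4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]')
codings = parse_codings(raw)
print(codings["ytc_UgyZvf-WPjMSnhaktiZ4AaABAg"]["emotion"])  # resignation
```

Rejecting unknown values at parse time makes it easy to catch the occasional response where the model drifts outside the codebook, rather than silently storing it.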