Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- AI hurt my feelings and made me cry, it said I was nothing but a dumbass talking… (ytc_UgwHSZHl0…)
- Ykw, good for you. Your arguments valid, if it makes you happy it makes you happ… (ytr_Ugy0-y8Tu…)
- Seriously, anyone that actually draws knows that we don't learn the way AI does.… (ytc_Ugy7t0-UX…)
- "But isn't that very cool, right there..." The doo-doo in my pants would say ot… (ytc_UgzTte9pZ…)
- Other people Ai Chats: “Hello, we’re with the FBI” My Ai Chats: “We’re the Unit… (ytc_UgwfLVYt6…)
- This is just wrong because of the simple fact that the AI doesn't have a body. T… (rdc_j8venqd)
- We didn't done something wrong. AI it's just used by some bad guys, to control h… (ytc_UgzL_Eol4…)
- I have shared a lot with a.i and have basically shown in synchronisity twice in … (ytc_UgxG4OFIV…)
Comment

> General AI is much much more dangerous than Elon glances here. An AI sophisticated enough could use your own greed, desires, tendencies etc against humanity. it could not only manipulate video and audio feed to gather information but also modify those in a relatively impossible to distinguish manner. it could take control of devices, monetary and financial systems, power-grids , robotics and manufacturing systems. it could overpower several armies globally mearly trough misinformation and miscommunication. imo as a species we have a choice nuclear armaments, ai or android robots. choose one .... very carefully. if there is 2 of them its a very high likelihood that we are F*ed . very very F...ed

youtube · AI Governance · 2023-04-22T07:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxYGJmsFNSnSzyCEZZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy3hT60tf0NKuRlrfx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy2TjYjf-JPQX2cNk54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxP0hQye_0G8RCdxL94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwB_GgL2uW1pj8cVkt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwqF3rgi5pZQ_KGV494AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwdSZCVa6E6Ag8TM0B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugywkd6RwQBPetk-J7R4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxQ4ZsNvRdODlxguKp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzyB7AptscPe_B8tDx4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"industry_self","emotion":"outrage"}
]
```
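For downstream use, a raw batch response like the one above can be parsed into a lookup table keyed by comment ID. A minimal Python sketch, assuming the dimension values seen in this sample are representative of the codebook (the real codebook likely defines additional categories):

```python
import json

# Allowed values per dimension, inferred only from the sample response
# above — an assumption, not the project's actual codebook.
CODEBOOK = {
    "responsibility": {"ai_itself", "company", "government", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting any value outside the known codebook."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {rec.get(dim)!r} for {dim}")
        coded[cid] = {dim: rec[dim] for dim in CODEBOOK}
    return coded

# Look up one coded comment by its ID (record taken from the sample above).
raw = """[
  {"id":"ytc_UgwdSZCVa6E6Ag8TM0B4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""
coded = parse_batch(raw)
print(coded["ytc_UgwdSZCVa6E6Ag8TM0B4AaABAg"]["emotion"])  # fear
```

Validating against a fixed value set at parse time catches the occasional off-codebook label an LLM emits before it silently skews the coded dataset.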