Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This man is so right if we’ve never had to face or deal with anything smarter than us. When it does become smarter than even the smartest people on earth and who created it then how do you stop it if by then with it being smarter than even the smartest people and those who created it how would it be stopped if it’d be smarter than them? Obviously it’d figure out by then how not to be stopped. Also like he stated if we have to worry about how people try to weaponize A.I. what happens if an Individual just hates people in general for whatever reason and their whole purpose of A.i. is to eliminate the human race what do we do then?
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-06-17T14:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxBtW7mNRWkpbcW7lR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgwWhxlkpzIsjgpZ5q94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw_yqykxVMNjSwWmF94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwYDmWjChoz_d9xM8d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwH7h9s-aQ-KUXe9Q14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyMUNWWnjpaBQDZSD54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz5I9LKoArUqGQmCvd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz7jW5HsbHHnMCtDKJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwfTdq2_ghQS5nEZst4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxfk2P1Tci6fhEvmWV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
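Raw responses like the one above can be checked before the coded values enter the dataset. The sketch below is a minimal validator, assuming the allowed category values are those visible in this page's examples (the full codebook may define more); the `SCHEMA` dictionary and function name are hypothetical, not part of the coding pipeline shown here.

```python
import json

# Allowed values per coding dimension, inferred from the responses
# shown above (assumption -- the real codebook may list more).
SCHEMA = {
    "responsibility": {"company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only entries whose values
    fall inside the schema and whose IDs look like YouTube comment
    (ytc_) or reply (ytr_) identifiers."""
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        in_schema = all(
            entry.get(dim) in allowed for dim, allowed in SCHEMA.items()
        )
        id_ok = entry.get("id", "").startswith(("ytc_", "ytr_"))
        if in_schema and id_ok:
            valid.append(entry)
    return valid

raw = ('[{"id":"ytc_UgyMUNWWnjpaBQDZSD54AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"ban","emotion":"fear"}]')
print(len(validate_batch(raw)))  # 1
```

Entries with out-of-schema values (e.g. a misspelled emotion) are silently dropped here; a production pipeline would more likely log them for manual review.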