# Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or click one of the random samples below to inspect it.
- "If Robert Oppenheimer, and other brilliant physicists of that time, never allow…" (ytc_UgwkeUxzQ…)
- "They didnt know it was bias. The bias is very well hidden and subtle, not someth…" (ytr_Ugw1tEgHz…)
- "I found the conversation about AI sentience to be thought-provoking. While I'm n…" (ytc_UgxMGCRie…)
- "My headcannon is that this is an AI trying to make people replace real art with …" (ytc_UgznmxFqY…)
- "3:19 \"I can pretty confidently say at this point that AI is better at drawing th…" (ytc_UgzytacMf…)
- "Yeah 7 to 10 years Siri will be smart enough to manage a bit of your groceries w…" (ytc_Ugy2Hm-5P…)
- "AI is merely a tool and depending on how people use that tool could be for good …" (ytc_Ugz2ZbbmZ…)
- "@HevaNaisdey Who says you have to simply sit around in your retirement? I will b…" (ytr_Ugwrv-80O…)
## Comment
I'm personally not that worried about it, but caution is still advisable.
1) we're not that close to a real AI. For instance GPT 4 is very impressive but it's just a linguistic bot, it's not a genuine AI. It's not actually thinking or perceiving.
2) The storage needs, energy needs, cooling needs, and maintenance needs for an AI will be quite extensive.
3) because of these needs any AI we create will be extremely fragile, vulnerable, and reliant upon humans to maintain its existence.
Source: youtube · Topic: AI Governance · Posted: 2023-04-19T11:0… · ♥ 1
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response
```json
[
  {"id":"ytc_UgzpESE2NiPcWRtLkHx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgywWb9b3XvXm7Sf0iB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxxmjpGb3AezvaVNzl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxJcsY5ewsABmgh6wV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugws5fNn_M_0i8wMrop4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw_87XmoXnYAy9ZR6J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz21tMrGKtY57o93tR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxYBGKjAr6akRbtJ1h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzQ1OTL1d0Lp-T3dmN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxvyRIPWSk1qxHXX9R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
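Since the raw model output is a JSON array of per-comment codings, looking a coding up by comment ID reduces to parsing the array and indexing it by the `id` field. Here is a minimal Python sketch of that lookup; `raw_response` is an excerpt of the array above, and the variable names are illustrative, not part of the actual tool.

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array where each
# element carries the comment ID plus the four coded dimensions.
raw_response = """
[
  {"id": "ytc_UgzpESE2NiPcWRtLkHx4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgywWb9b3XvXm7Sf0iB4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
"""

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgzpESE2NiPcWRtLkHx4AaABAg"]
print(coding["reasoning"])  # consequentialist
```

Keying on `id` also makes it easy to detect responses where the model dropped or duplicated a comment: compare the set of keys against the IDs that were sent in the batch.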