Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Maybe AI, in all its wisdom, will find a way to get humans to come together and …" (ytc_UgwnuSL0w…)
- "Where do you draw the line? Let's say a digital artist spends 100 hours working …" (ytc_Ugxlv4eUV…)
- "We need to develop a CEO AI and charge companies 1 million per year to use it. E…" (ytc_UgxYlZ5ZR…)
- "How long ai does the coding as per our needs? Need this tech asap as it will hel…" (ytc_Ugx5DiDfe…)
- "I don't think it's a good idea of using AI at all. They are like forcing student…" (ytc_Ugy9YgIo0…)
- "Same. the point isn't a Ai narration being good or bad. It's no one bothered to …" (ytr_UgzVIXVtb…)
- "Small online business people. They are calling them virtual agents and there are…" (ytr_UgybNF-59…)
- "My mom is teaching a similar level/situation and it is hugely frustrating and ti…" (rdc_nu16jn7)
Comment
I LOVE all of the "experts" in the comments saying "this isn't AI, these warnings are blown out of proportion, I work in AI and this is all wrong, this is bs, that's not what AI is" and then insulting people in the comments.
Dude, shut up. You want to direct me to YOUR channel, where you prove who you are, and debate these things, PLEASE do. Until then, I'll take the word of "the godfather of AI" over some nameless creep in the comments, who is probably just AI trying to trick us into a false sense of security anyway (that's a joke, I know you AI don't understand humor... Lol)
Honestly though, put up or shut up, a LOT of very smart, and very prominent peoplre are saying there is a thereat. Who are you, and why should I care. I know who they are, and theyve convinced me to care.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2023-09-26T02:1… |
| Likes | 2 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwk2FTyqBcLfabpEkh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw4Fk1dd8nAUQt3k8h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyEtmrC_SHQ0fzhAdR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz4UhnaSGQMJQQMZK54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxghMK6JSZruQuQ6Zd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyG2Oaz707SVQtKh014AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwz-k7hD33RuiCmFNV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgxDy4TCe70LDLOqFzl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugwjy4bUxJRRs40KNrJ4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw9KiJS2PRcVwxPcP94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
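The lookup-by-comment-ID view above boils down to parsing the raw LLM response and indexing the coded rows by `id`. A minimal sketch in Python, using an excerpt of the array shown above (the tool's actual implementation is not shown here, so this is illustrative only):

```python
import json

# Excerpt of the raw LLM response: a JSON array of coding records,
# one per comment, each keyed by the comment ID.
raw_response = """
[
  {"id": "ytc_UgxDy4TCe70LDLOqFzl4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugw9KiJS2PRcVwxPcP94AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
"""

# Index the codings by comment ID so a single lookup returns the full record.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the comment shown in the Coding Result table above.
coding = codings["ytc_UgxDy4TCe70LDLOqFzl4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer outrage
```

Indexing once into a dict makes repeated ID lookups constant-time, which matters when a batch response covers many comments.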