Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I get banned from YT for saying/warning exactly this (Content of this video). Pe…" — ytc_UgzuJRdhN…
- "100% I said the same thing, there is so much more to his story they didn’t talk …" — ytr_UgxSxt8si…
- "I have a coworker who “writes” AI “music” and then tells people he wrote it. He…" — ytc_Ugzu7FNX-…
- "Sucks how victims are treated even by ppl who know the material’s fake. I’d be h…" — ytc_UgyA3tHPp…
- "After complaining that the big AI huys don't stop, he gets the opportunity to pu…" — ytc_UgzkRZK1X…
- "The current AI acts as a fiction generator. Fifty percent of American jobs are r…" — ytc_UgzWK7Jw7…
- "OpenAI are only making it look like they want the government to legislate their …" — ytc_UgxCgAxvv…
- "When this came out, I was like "bs. he's lying.", but ever since chatGPT was rel…" — ytc_Ugx4bfx3E…
Comment
> The danger, as with all things, is that this can be used for food for evil depending on the mind of the person with their fingers on the buttons! Without ethical oversight, we run into military applications, which can be good and evil. Then we have those whose "ethics" can be bought. As we see in the video, all thing have a price! We see AI being used to fleece people of BILLIONS of dollars every year by catfishing alone! And now the elderly are being targeted in increasing numbers by using AI voices to "bail out" relatives by unscrupulous scum. The military will ALWAYS be scrutinizing the tech world to see what can be used for their gain.
youtube · AI Governance · 2024-04-27T01:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyVUMFiYKwbtksry5d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgzGqMJYT3hfbMJ9lRZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyl_FRbqiVis6KP_lZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxE357iUR-rd12yGWF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx0FUS9mXv02HX7pY94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzEumnxMprhcdVrd4N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugys2PvFxn61PPF4wJZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxlOSKbpcU8PzjeDFh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw-iqGKZ9-kibEVsux4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgylVsMHuNNGVjKTPA14AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
```
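The lookup-by-ID feature above amounts to parsing the raw batch response and indexing the coded rows by comment ID. A minimal sketch of that step, assuming the raw response is valid JSON in the shape shown (the function name `index_codings` and the `raw_response` variable are illustrative, not part of the tool's API; the two rows are taken from the response above):

```python
import json

# Two rows copied from the raw LLM response shown above (illustrative subset).
raw_response = """[
  {"id": "ytc_UgxE357iUR-rd12yGWF4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw-iqGKZ9-kibEVsux4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"}
]"""

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index each coding dict by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytc_UgxE357iUR-rd12yGWF4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → user fear
```

A dict index makes repeated lookups O(1), which matters when the same batch response is inspected for many different comment IDs.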