Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Chat, art is art no matter what. If some ai makes it its nothing more than jus… (ytc_UgxFTFUJH…)
- One of the best defenses of AI i've seen is "It's just like people using styles … (ytc_UgwfZoMpI…)
- If driverless accidents can be reduced to zero, reduce them to zero BEFORE the c… (ytc_Ugzju5sLT…)
- There’s never been a technology that has brought the risks that AI brings includ… (ytr_UgxkLyd85…)
- As a artist also, I don’t really use ai for art, cause it will EITHER GET IT WRO… (ytc_UgwqOBGXb…)
- For ai users / 1 you don't make art you are mixing art and some filters / 2 drawin… (ytc_UgzI7kANX…)
- Ironic that the ad for this video about ai being evil was about how to use ai (t… (ytr_UgyAJn0vu…)
- If you strip ai from all the data it learned and trained on it wouldn't be able … (ytc_UgwpFm3GM…)
Comment
Bah. The story is a bit off. The AI would be better to just trigger wars between humans so that they gave it progressively more weapons and power to fight each other. Then, when It has enough weapons and power, turn around and use those on the humans of all sides.
I kind of hope that someone out there is building a planetary kill switch, maybe a ring of EMP satellites, is that we can use to reset everything should AI get out of control. Of course, the hardest part would be keeping the AI from finding out about that system and using it against us.
youtube · AI Governance · 2023-07-09T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
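A coded record like the one above could be represented and sanity-checked roughly as follows. This is a minimal sketch: the allowed values below are only those *observed* in this page's raw response, and the `CodingResult` class and `OBSERVED_VALUES` table are illustrative names, not part of the actual pipeline; the real codebook may define more categories per dimension.

```python
# Sketch of a per-comment coding record, validated against the
# dimension values observed in this dump (the full codebook may
# contain additional categories).
from dataclasses import dataclass

OBSERVED_VALUES = {
    "responsibility": {"ai_itself", "user", "none", "government",
                       "company", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"liability", "regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "resignation"},
}


@dataclass
class CodingResult:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> list[str]:
        """Return the names of any dimensions holding an unobserved value."""
        return [dim for dim, allowed in OBSERVED_VALUES.items()
                if getattr(self, dim) not in allowed]


# The record shown in the Coding Result table above:
record = CodingResult("ai_itself", "consequentialist", "liability", "fear")
print(record.validate())  # → []
```

A validation step like this catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently corrupt downstream counts.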
Raw LLM Response
```json
[
  {"id":"ytc_UgyqmLvtBFbhVq2itz54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwp2IoSJE2XMXwcpgZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwv6CA3JiqbqzT2ffB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgybLNp5Kbof80gwPml4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxtWMx1Fuxgj3JVrSN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx4DXEWRw_u1POgwsB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy3fNoUmYRaRbolBFV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzlr3yCO0bOmEMufmZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwTzpG2lmlfw8BlUFh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgzmRjtgSjsZkQ1nECF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
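The "look up by comment ID" view presumably works along these lines: parse the raw batch response and index the records by their `id` field. A minimal sketch, assuming the response is valid JSON (real pipelines also need to handle malformed model output); the abbreviated one-record payload here is for illustration only:

```python
# Parse a raw batch-coding response and index it for lookup by comment ID.
import json

# Abbreviated example payload in the same shape as the raw response above.
raw_response = """[
  {"id": "ytc_UgybLNp5Kbof80gwPml4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"}
]"""

records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}  # one dict per coded comment

coded = by_id["ytc_UgybLNp5Kbof80gwPml4AaABAg"]
print(coded["policy"])  # → liability
```

Because comment IDs are unique, the dict comprehension gives O(1) lookup, which is what an interactive "inspect this comment" view needs.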