Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "you are really stupid. To 'poison' your artwork which directly or indirectly wil…" (`ytc_UgykPPbM-…`)
- "@Johnzz1 what do you think made the AI? You can apply that same argument to trad…" (`ytr_UgwxM1om6…`)
- "After the lawsuits were filed, ChatGPT was actually severely restricted on what …" (`ytc_UgwLgFGOE…`)
- "Quite hypocritical that this guy is “afraid” of ai when he is building ai self d…" (`ytc_UgyYMoyrs…`)
- "One way that I try and explain why I hate AI to a non artist is: telling them to…" (`ytc_Ugw9BPqv7…`)
- "Seems allot more useful than using PEMDAS or algebra for gods sake lol the USA e…" (`ytc_UgzyEv0mG…`)
- "As someone who works with neural networks professionally, though admittedly I de…" (`ytc_UgzabCMj_…`)
- "Lind McMahon is embarrassing AF, and is no way educated herself for that positio…" (`ytc_UgxLesQFA…`)
Comment
Fear mongering is something an AI is not capable of, unless a human gives it a SPECIFIC instruction to do so.
Everything you saw on this episode was the result of an AI given SPECIFIC instructions to fear monger and sell the idea to the human user.
None of this just "sprang up" organically, the way he implies in the episode.
The researcher at Google asked the AI hypothetical questions, and it obliged, much the way the AI who created the "AI destroys the world" story obliged.
Fear sells.
youtube · AI Governance · 2023-07-08T11:0… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzN_idUQGYkfE4_tEV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzOCP1deUXUdhAfCV94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxog01iOsjUownCwEJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyrOSBTFaZxWhYpO7h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxjgPjTZbrN-MqEsl94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwt3JpSY_CwuRKAZYh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzTPFuur8Ztblxe-yp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgycObNM2xRuydKqsLV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxXtoQbvrFxOIBJHJR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyKCvZOoCe0ECNG-7p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
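The "look up by comment ID" view above amounts to parsing this JSON array and indexing it by the `id` field. A minimal sketch of that lookup, assuming the model reply is always a valid JSON array in the shape shown (the `lookup_codes` helper and the two-record excerpt are illustrative, not the tool's actual code):

```python
import json

# Hypothetical excerpt of a model response in the shape shown above.
raw_response = """[
  {"id": "ytc_UgzN_idUQGYkfE4_tEV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxjgPjTZbrN-MqEsl94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]"""

def lookup_codes(response_text, comment_id):
    """Parse the model's JSON array and return the code record for one comment ID,
    or None if that ID was not coded in this response."""
    records = json.loads(response_text)  # raises ValueError on malformed output
    by_id = {record["id"]: record for record in records}
    return by_id.get(comment_id)

codes = lookup_codes(raw_response, "ytc_UgzN_idUQGYkfE4_tEV4AaABAg")
print(codes["reasoning"])  # consequentialist
```

In practice a model reply may occasionally be malformed, so a real pipeline would catch `json.JSONDecodeError` and flag the batch for re-coding rather than assume clean JSON.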